BrainLearn - Educational Video Generator

Status: Design Phase
Type: New Application
Stack: Next.js 16 + React 19 + Tailwind v4 + Supabase + Remotion

Overview

BrainLearn generates short-form educational videos in the "brainrot" style (TikTok/Reels format) from infrastructure documentation. It features original character dialogues, synchronized subtitles, and gameplay backgrounds to make learning engaging.

Primary Use Case: Transform /root/tower-fleet/docs/ content into digestible video lessons for personal learning, with optional social media publishing.

Core Concept

┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Docs Ingestion │────▶│  AI Dialogue Gen │────▶│  Video Render   │
│  (Markdown)     │     │  (Q&A Format)    │     │  (Remotion)     │
└─────────────────┘     └──────────────────┘     └─────────────────┘
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Social Upload  │◀────│  Review Queue    │◀────│  Preview/Edit   │
│  (Manual/Auto)  │     │  (Approve/Reject)│     │  (Web UI)       │
└─────────────────┘     └──────────────────┘     └─────────────────┘

Design Decisions

Characters: Original Mascots

To avoid IP issues and enable public posting, we'll create original characters:

Character | Role        | Voice Style                    | Visual
----------|-------------|--------------------------------|-------------------
Kube      | The Teacher | Calm, methodical explainer     | Blue robot/mascot
Proxy     | The Student | Curious, asks "dumb" questions | Orange fox/gremlin

Dialogue Format:

  • Proxy asks a beginner question (what users might wonder)
  • Kube explains in simple terms
  • Proxy follows up or summarizes
  • Back-and-forth creates engagement

Voice: Local TTS (Coqui/Piper)

Choice: Self-hosted TTS for zero ongoing costs

Options to evaluate:

  1. Piper TTS - Fast, lightweight, good-quality voices
  2. Coqui TTS - More natural; supports voice cloning if needed later
  3. Bark - Most natural but slower; good for final renders

Implementation:

  • Run the TTS service in a dedicated container
  • Pre-generate voice profiles for each character
  • API endpoint: POST /api/tts { text, character } → audio file
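As one sketch of the per-character voice mapping, the endpoint could shell out to Piper, piping the dialogue text to stdin and selecting a pre-generated voice model per character. The character-to-model mapping and the exact voice names below are assumptions, not settled choices:

```typescript
// Hypothetical mapping from character to a Piper voice model (assumed names).
const VOICE_PROFILES: Record<string, string> = {
  kube: "en_US-ryan-medium",  // calm, methodical teacher voice
  proxy: "en_US-amy-medium",  // curious, energetic student voice
};

// Build the Piper CLI arguments for one line of dialogue.
// The dialogue text itself is piped to Piper's stdin, not passed as a flag.
function piperArgs(character: string, outPath: string): string[] {
  const model = VOICE_PROFILES[character];
  if (!model) throw new Error(`Unknown character: ${character}`);
  return ["--model", `${model}.onnx`, "--output_file", outPath];
}
```

The /api/tts route would then spawn `piper` with these arguments and stream back the resulting audio file.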

Video Rendering: Dedicated LXC

Rationale: Simplicity over speed. 15-20 min render time is acceptable for batch generation.

Container Spec:

  • CTID: 180 (next available in sequence)
  • Resources: 8 cores, 8GB RAM, 50GB disk
  • Stack: Node.js 20, ffmpeg, Chromium (for Remotion)
  • Network: internal only (10.89.97.x)

Rendering Pipeline:

  1. Web app queues a render job in Supabase
  2. Worker container polls for jobs
  3. Remotion generates the video (React → MP4)
  4. Upload to Supabase Storage
  5. Mark job complete, notify web UI
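Step 2 can be sketched as a generic polling loop. `fetchNext` and `handle` are injected placeholders here (in production they would wrap the Supabase queue query and the worker's `processJob`), so this is an illustrative shape rather than the actual worker code:

```typescript
// Fetch the next pending job and process it; returns false when the queue is
// empty so the caller knows to sleep before polling again.
async function pollOnce<T>(
  fetchNext: () => Promise<T | null>,
  handle: (job: T) => Promise<void>,
): Promise<boolean> {
  const job = await fetchNext();
  if (!job) return false; // nothing pending
  await handle(job);
  return true;
}

// Assumed outer loop: poll, and back off briefly when idle.
async function runWorker<T>(
  fetchNext: () => Promise<T | null>,
  handle: (job: T) => Promise<void>,
  idleMs = 5000,
): Promise<never> {
  for (;;) {
    const worked = await pollOnce(fetchNext, handle);
    if (!worked) await new Promise((r) => setTimeout(r, idleMs));
  }
}
```

Injecting the queue and handler keeps the loop testable without a live Supabase instance.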

Content Source: Infrastructure Docs

Ingestion Strategy:

/root/tower-fleet/docs/
├── getting-started/     → "What is..." basics
├── workflows/           → "How to..." tutorials
├── reference/           → Deep dives, commands
└── applications/        → App-specific content

Topic Extraction:

  1. Parse markdown files
  2. Extract headings as potential topics
  3. AI generates Q&A dialogue from the content
  4. Store in Supabase with a source reference
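The heading-based extraction in steps 1-2 can be sketched as a small parser. Splitting on `## ` headings is an assumption about the docs' structure; the `Topic` shape mirrors the `topics` table columns:

```typescript
interface Topic {
  title: string;      // heading text → topics.title
  sourceFile: string; // docs path → topics.source_file
  content: string;    // extracted markdown → topics.content
}

// Split a markdown document into one topic per "## " heading.
function extractTopics(markdown: string, sourceFile: string): Topic[] {
  const topics: Topic[] = [];
  let current: Topic | null = null;
  for (const line of markdown.split("\n")) {
    const m = line.match(/^## (.+)/);
    if (m) {
      if (current) topics.push(current);
      current = { title: m[1].trim(), sourceFile, content: "" };
    } else if (current) {
      current.content += line + "\n";
    }
  }
  if (current) topics.push(current);
  return topics;
}
```

Each extracted topic would then be inserted into `topics` and handed to the dialogue generator.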

Generation Pipeline: Review Before Publish

Workflow:

  1. Topic Selection - Browse ingested topics or request new ones
  2. Dialogue Generation - AI creates the character script
  3. Script Review - Edit dialogue before rendering
  4. Video Render - Queue for processing
  5. Preview - Watch the generated video in the browser
  6. Approve/Reject - Move to the publish queue or regenerate
  7. Publish - Manual upload or scheduled auto-post
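A small guard over script statuses could enforce this review-before-publish flow. The transition table below is an assumption extrapolated from the `scripts.status` values (draft, approved, rendered), not a finalized state machine:

```typescript
// Assumed allowed transitions: a draft must be approved before rendering,
// and an approved script can be sent back to draft for edits.
const TRANSITIONS: Record<string, string[]> = {
  draft: ["approved"],
  approved: ["rendered", "draft"],
  rendered: [],
};

function canTransition(from: string, to: string): boolean {
  return (TRANSITIONS[from] ?? []).includes(to);
}
```

The PUT /api/scripts/:id handler could reject any status change that fails this check.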

Architecture

System Components

┌─────────────────────────────────────────────────────────────┐
│                        Kubernetes Cluster                    │
│  ┌─────────────────┐  ┌─────────────────┐                   │
│  │   BrainLearn    │  │    Supabase     │                   │
│  │   (Next.js)     │◀─│   (Shared K8s)  │                   │
│  │   Ingress:      │  │                 │                   │
│  │   learn.internal│  │   Schema:       │                   │
│  └────────┬────────┘  │   brain_learn   │                   │
│           │           └─────────────────┘                   │
└───────────┼─────────────────────────────────────────────────┘
            ▼ Job Queue (Supabase)
┌───────────────────────┐
│   LXC 180: Renderer   │
│   ┌─────────────────┐ │
│   │  Render Worker  │ │
│   │  (Node.js)      │ │
│   ├─────────────────┤ │
│   │  Piper TTS      │ │
│   │  (Python)       │ │
│   ├─────────────────┤ │
│   │  Remotion       │ │
│   │  (Chromium)     │ │
│   └─────────────────┘ │
└───────────────────────┘

Database Schema (brain_learn)

-- Ingested documentation topics
CREATE TABLE topics (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  title TEXT NOT NULL,
  source_file TEXT NOT NULL,          -- docs path
  source_section TEXT,                 -- heading anchor
  content TEXT NOT NULL,               -- extracted markdown
  difficulty TEXT CHECK (difficulty IN ('beginner', 'intermediate', 'advanced')),
  tags TEXT[],
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Generated dialogues/scripts
CREATE TABLE scripts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  topic_id UUID REFERENCES topics(id),
  dialogue JSONB NOT NULL,            -- [{character, text, duration_hint}]
  status TEXT DEFAULT 'draft',        -- draft, approved, rendered
  version INT DEFAULT 1,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Render jobs queue
CREATE TABLE render_jobs (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  script_id UUID REFERENCES scripts(id),
  status TEXT DEFAULT 'pending',      -- pending, processing, complete, failed
  background_type TEXT DEFAULT 'minecraft', -- minecraft, subway, custom
  output_url TEXT,                    -- Supabase Storage URL
  duration_seconds INT,
  error_message TEXT,
  started_at TIMESTAMPTZ,
  completed_at TIMESTAMPTZ,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Published/scheduled videos
CREATE TABLE publications (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  render_job_id UUID REFERENCES render_jobs(id),
  platform TEXT NOT NULL,             -- tiktok, reels, youtube_shorts
  status TEXT DEFAULT 'pending',      -- pending, scheduled, published, failed
  scheduled_for TIMESTAMPTZ,
  published_at TIMESTAMPTZ,
  external_url TEXT,                  -- URL on platform after publish
  created_at TIMESTAMPTZ DEFAULT NOW()
);

API Routes

/api/
├── topics/
│   ├── GET    /              - List all topics
│   ├── POST   /ingest        - Trigger docs ingestion
│   └── GET    /:id           - Get topic details
├── scripts/
│   ├── GET    /              - List scripts
│   ├── POST   /generate      - Generate dialogue from topic
│   ├── PUT    /:id           - Update/edit script
│   └── POST   /:id/approve   - Mark ready for render
├── render/
│   ├── GET    /queue         - View render queue
│   ├── POST   /              - Queue new render job
│   └── GET    /:id/status    - Check render status
├── videos/
│   ├── GET    /              - List rendered videos
│   ├── GET    /:id           - Get video details + preview
│   └── DELETE /:id           - Remove video
└── publish/
    ├── GET    /queue         - Publication queue
    ├── POST   /schedule      - Schedule for publishing
    └── POST   /:id/publish   - Trigger immediate publish
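As one sketch of input handling for POST /api/render, a helper could validate the request body against the `render_jobs` columns before queueing. The function name, error messages, and defaulting behavior are illustrative:

```typescript
// Assumed: background_type defaults to 'minecraft', matching the schema.
const BACKGROUNDS = ["minecraft", "subway", "custom"] as const;

interface RenderRequest {
  script_id: string;
  background_type: string;
}

function validateRenderRequest(body: unknown): RenderRequest {
  const b = body as Record<string, unknown> | null;
  if (typeof b?.script_id !== "string") throw new Error("script_id is required");
  const background = (b.background_type as string | undefined) ?? "minecraft";
  if (!(BACKGROUNDS as readonly string[]).includes(background)) {
    throw new Error(`unknown background_type: ${background}`);
  }
  return { script_id: b.script_id, background_type: background };
}
```

The route handler would insert the validated row into `render_jobs` with status 'pending' for the worker to pick up.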

UI Design

Pages

  1. Dashboard (/)
     • Stats: topics ingested, scripts pending, videos ready
     • Quick actions: ingest docs, generate random topic
     • Recent activity feed

  2. Topics (/topics)
     • Browse ingested documentation topics
     • Filter by difficulty, tags, source file
     • Click to generate a script

  3. Scripts (/scripts)
     • List of generated dialogues
     • Edit dialogue in place
     • Preview text-to-speech
     • Approve for rendering

  4. Render Queue (/render)
     • Active and completed render jobs
     • Progress indicators
     • Error logs for failed renders

  5. Videos (/videos)
     • Grid of rendered videos with thumbnails
     • In-browser video preview
     • Approve/reject for publishing
     • Download option

  6. Publish (/publish)
     • Publication queue and schedule
     • Platform connections (TikTok, Instagram)
     • Analytics (if available from platform APIs)

Component Library

Standard shadcn/ui components:

  • DataTable for topics/scripts/videos lists
  • Dialog for script editing
  • Tabs for filtering
  • Progress for render status
  • Video player (custom or react-player)

Character System

Sprite Assets

/public/characters/
├── kube/
│   ├── idle.png           # Default pose
│   ├── talking-1.png      # Mouth open
│   ├── talking-2.png      # Mouth closed
│   ├── thinking.png       # Hand on chin
│   └── excited.png        # Gesturing
├── proxy/
│   ├── idle.png
│   ├── talking-1.png
│   ├── talking-2.png
│   ├── confused.png       # Question mark
│   └── enlightened.png    # Lightbulb moment
└── backgrounds/
    ├── minecraft/         # Parkour gameplay clips
    └── subway/            # Subway surfer clips

Animation Logic (Remotion)

// Simplified character component
const Character: React.FC<{
  character: 'kube' | 'proxy';
  speaking: boolean;
  frame: number;
}> = ({ character, speaking, frame }) => {
  // Alternate mouth frames when speaking
  const mouthFrame = speaking ? (Math.floor(frame / 4) % 2) + 1 : 0;
  const sprite = speaking
    ? `/characters/${character}/talking-${mouthFrame}.png`
    : `/characters/${character}/idle.png`;

  return (
    <Img src={sprite} style={{ position: 'absolute', bottom: 0 }} />
  );
};

Dialogue Format

{
  "dialogue": [
    {
      "character": "proxy",
      "text": "Hey Kube, what's a Kubernetes pod?",
      "emotion": "confused",
      "duration_hint": 3
    },
    {
      "character": "kube",
      "text": "Great question! A pod is the smallest deployable unit in Kubernetes. Think of it like a wrapper around one or more containers.",
      "emotion": "explaining",
      "duration_hint": 8
    },
    {
      "character": "proxy",
      "text": "Oh, so it's like a container for containers?",
      "emotion": "thinking",
      "duration_hint": 3
    },
    {
      "character": "kube",
      "text": "Exactly! And containers in the same pod share storage and a network namespace.",
      "emotion": "excited",
      "duration_hint": 5
    }
  ]
}
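For a rough preview, the `duration_hint` values can be turned into Remotion frame ranges (the real pipeline would use measured TTS audio durations instead). A minimal sketch at an assumed 30 fps:

```typescript
interface DialogueLine {
  character: string;
  text: string;
  duration_hint: number; // seconds
}

interface TimedLine extends DialogueLine {
  from: number;            // start frame
  durationInFrames: number;
}

const FPS = 30; // assumed composition frame rate

// Lay dialogue lines out back-to-back on the composition timeline.
function timeline(dialogue: DialogueLine[]): TimedLine[] {
  let cursor = 0;
  return dialogue.map((line) => {
    const durationInFrames = Math.round(line.duration_hint * FPS);
    const timed = { ...line, from: cursor, durationInFrames };
    cursor += durationInFrames;
    return timed;
  });
}
```

Each `TimedLine` then maps directly onto a Remotion `<Sequence from durationInFrames>` wrapping the speaking character and its subtitle.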

Implementation Phases

Phase 1: Foundation

  • [ ] Create LXC 180 for rendering
  • [ ] Set up Next.js app with standard skeleton
  • [ ] Configure Supabase schema (brain_learn)
  • [ ] Basic UI scaffold with shadcn

Phase 2: Content Pipeline

  • [ ] Docs ingestion script (markdown → topics)
  • [ ] AI dialogue generation (Claude API)
  • [ ] Script editor UI
  • [ ] Local TTS integration (Piper)

Phase 3: Video Generation

  • [ ] Remotion project setup
  • [ ] Character sprite system
  • [ ] Subtitle rendering with word-level timing
  • [ ] Background video overlay
  • [ ] Render worker service

Phase 4: Review & Publish

  • [ ] Video preview player
  • [ ] Approval workflow
  • [ ] Download functionality
  • [ ] (Optional) Social media API integration

Tech Stack Details

Dependencies

{
  "dependencies": {
    "next": "^16.0.0",
    "react": "^19.0.0",
    "remotion": "^4.x",
    "@remotion/cli": "^4.x",
    "@remotion/player": "^4.x",
    "@supabase/supabase-js": "^2.x",
    "@supabase/ssr": "^0.5.x",
    "ai": "^4.x",
    "@ai-sdk/anthropic": "^1.x"
  }
}

Render Worker

Separate Node.js process in LXC 180:

// worker/index.ts
import { renderVideo } from './remotion';
import { generateAudio, calculateTiming } from './tts';
import { uploadVideo, updateJobComplete } from './supabase'; // project Supabase helpers

async function processJob(job: RenderJob) {
  // 1. Generate TTS audio for each dialogue line
  const audioFiles = await generateAudio(job.script.dialogue);

  // 2. Calculate timing from audio durations
  const timing = calculateTiming(audioFiles);

  // 3. Render video with Remotion
  const outputPath = await renderVideo({
    dialogue: job.script.dialogue,
    audioFiles,
    timing,
    background: job.background_type
  });

  // 4. Upload to Supabase Storage
  const url = await uploadVideo(outputPath);

  // 5. Update job status
  await updateJobComplete(job.id, url);
}

Cost Analysis

One-time Setup

  • Character art commission: $100-300 (or generate with AI)
  • Background gameplay clips: Free (Creative Commons)

Ongoing Costs

  • Compute: $0 (self-hosted LXC)
  • TTS: $0 (Piper is local/free)
  • AI Dialogue: ~$0.01-0.05 per script (Claude API)
  • Storage: Supabase free tier (1GB) or self-hosted

Estimated monthly: <$5 for occasional dialogue generation

Open Questions

  1. Character Design: Commission artist, use AI generation, or simple geometric mascots?
  2. Background Videos: Source from Creative Commons, record gameplay, or generate?
  3. Social Media APIs: Worth automating or just manual upload?
  4. Voice Training: Start with default Piper voices or train custom from the start?

References