
Development Environment Migration: LXC to Host-Based

Status: Complete
Created: 2024-12-03
Target: Simplify development by eliminating LXC containers for Next.js apps


Executive Summary

Migrate from per-app LXC containers to running Next.js development directly on the Proxmox host. This eliminates permission issues, reduces resource overhead, and simplifies the development workflow.

Current State

  • Each app runs in its own LXC container (150, 160, 170, 190, etc.)
  • Docker-in-LXC runs local Supabase instances
  • /root/projects/* are symlinks to LXC filesystems
  • Complex 8-stage app creation workflow with permission fixes

Target State

  • All apps run directly on host in /root/projects/
  • K8s Supabase sandbox (supabase-sbx) for development
  • K8s Supabase (supabase) for production
  • Simple 5-step app creation workflow

Architecture Comparison

Before (LXC-Based)

Proxmox Host
├── /root/projects/money-tracker -> /rpool/data/subvol-150-disk-0/root/money-tracker
├── LXC 150 (money-tracker)
│   ├── Node.js 20
│   ├── Docker
│   │   └── Supabase containers (local dev)
│   └── /root/money-tracker (app code)
├── LXC 160 (home-portal)
│   └── ... same pattern
└── K8s Cluster
    └── supabase namespace (production)

Problems:

  • Triple containerization (LXC → Docker → Supabase containers)
  • UID 100000 permission mapping issues
  • 4GB RAM per LXC even when idle
  • Docker-in-unprivileged-LXC has syscall restrictions
  • Complex scaffold process with container path handling

After (Host-Based)

Proxmox Host
├── /root/projects/
│   ├── money-tracker/     (actual directory)
│   ├── home-portal/       (actual directory)
│   ├── trip-planner/      (actual directory)
│   └── rms/               (actual directory)
├── Node.js 22 (already installed)
└── K8s Cluster
    ├── supabase-sbx namespace (development)
    └── supabase namespace (production)

Benefits:

  • No permission issues (Claude Code edits files directly)
  • No container overhead for dev
  • Simpler tooling (npm run dev just works)
  • K8s Supabase already handles multi-app via schema isolation


Supabase Environment Strategy

Environment   Namespace          URL                        Purpose
Sandbox       supabase-sandbox   http://10.89.97.221:8000   Development, migration testing, experiments
Production    supabase           http://10.89.97.214:8000   Deployed production apps
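
A quick reachability check for either environment (a minimal sketch; it only confirms the API gateway answers, not that every service behind it is healthy):

# Each gateway should return an HTTP response if Kong is up
curl -sI http://10.89.97.221:8000 | head -n 1   # sandbox
curl -sI http://10.89.97.214:8000 | head -n 1   # production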

Schema Isolation (Unchanged)

Each app uses its own PostgreSQL schema:

  • home_portal - Home Portal app
  • money_tracker - Money Tracker app
  • trip_planner - Trip Planner app
  • rms - Recipe Management System

This pattern works identically in both sbx and prod.
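
Bootstrapping a schema for a new app is a one-liner against the cluster database. A sketch, assuming Postgres is exposed as a supabase-db deployment (the deployment name is an assumption; adjust to the actual cluster):

# Create the app's schema (deployment name supabase-db is an assumption)
kubectl exec -n supabase-sandbox deploy/supabase-db -- \
    psql -U postgres -c 'CREATE SCHEMA IF NOT EXISTS trip_planner;'

# List schemas to confirm each app is isolated
kubectl exec -n supabase-sandbox deploy/supabase-db -- \
    psql -U postgres -c '\dn'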


Migration Phases

Phase 1: Preparation ✅ COMPLETE

  • [x] Confirm supabase-sandbox is running and accessible
  • [x] Verify Node.js on host (node --version → v22.15.0)
  • [x] /root/projects/ converted from symlinks to actual directories
  • [x] Original symlinks backed up to /root/projects-backup-20251204/symlinks.txt
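
These checks can be re-run from the host at any time (using the supabase-sandbox namespace name; see Open Questions):

# Sandbox Supabase pods up?
kubectl get pods -n supabase-sandbox

# Node.js available on host
node --version    # expect v22.15.0

# Any remaining symlinks under /root/projects?
find /root/projects -maxdepth 1 -type l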

Phase 2: Project Migration ✅ COMPLETE

  • [x] Projects migrated via cp -a from LXC filesystems to host
  • [x] Ownership fixed to root:root (no more UID 100000 issues)
  • [x] Git history preserved for all projects
  • [x] Projects migrated:
    • [x] money-tracker (from LXC 150)
    • [x] home-portal (from LXC 160)
    • [x] rms (from LXC 170)
    • [x] subtitleai (from LXC 180)
    • [x] trip-planner (from LXC 190)
    • [x] brainlearn (from LXC 182)
    • [x] tcg (from LXC 110)
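
Spot checks that ownership and git history survived the copy:

# Anything not owned by root is a leftover UID 100000 mapping
find /root/projects -maxdepth 2 ! -user root | head

# Latest commit per project confirms git history came across
for p in /root/projects/*/; do
    git -C "$p" log --oneline -1 2>/dev/null || echo "no git history: $p"
done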

Phase 3: Documentation Updates ✅ COMPLETE

Primary Rewrites

  • [x] docs/workflows/creating-new-app.md - Rewritten for host-based 5-step workflow
  • [x] docs/reference/CLAUDE.md - Updated for host-based development
  • [x] scripts/scaffold-nextjs.sh - New host-based scaffold script created

Secondary Updates (Optional - can be done incrementally)

  • [ ] docs/applications/*.md - Update paths as needed
  • [ ] docs/operations/proxmox-lxc-operations.md - Mark dev sections as legacy

Script Status

  • [x] Created /root/tower-fleet/scripts/scaffold-nextjs.sh (host-based)
  • LXC templates retained for service containers (non-dev use)

Phase 4: Validation ✅ COMPLETE

  • [x] Verified migrated projects build successfully (home-portal, money-tracker, trip-planner)
  • [x] Git remotes accessible from host
  • [x] No permission issues (files owned by root:root)
  • [ ] Test new app creation workflow (optional - run when needed)
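
The build verification reduces to a loop over the validated apps:

for p in home-portal money-tracker trip-planner; do
    (cd "/root/projects/$p" && npm run build) || echo "BUILD FAILED: $p"
done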

Phase 5: Cleanup ✅ COMPLETE

  • [x] Stopped dev LXC containers: 110, 150, 160, 170, 180, 182, 190
  • [x] Old symlinks replaced with actual directories
  • Containers preserved (not destroyed) - can restart if needed: pct start <id>
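
The stop/restart commands, for reference (IDs from the list above):

# Stop all dev containers (they are preserved, not destroyed)
for id in 110 150 160 170 180 182 190; do pct stop "$id"; done

# Bring one back if something was missed
pct start 150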

New App Creation Workflow (Post-Migration)

Quick Reference

# 1. Create Next.js app
cd /root/projects
npx create-next-app@latest my-app --typescript --tailwind --app --eslint --import-alias '@/*'

# 2. Apply skeleton
cd my-app
/root/tower-fleet/scripts/scaffold-nextjs.sh

# 3. Install dependencies
npm install @supabase/supabase-js @supabase/ssr clsx tailwind-merge lucide-react

# 4. Configure environment
cp .env.local.example .env.local
# Edit: Set NEXT_PUBLIC_SUPABASE_URL and ANON_KEY for sbx

# 5. Initialize Supabase schema (if new app)
# See: docs/workflows/database-migrations.md

# 6. Run dev server
npm run dev
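
Dev servers now run in host tmux sessions instead of inside containers. A sketch; the session name is a convention, not a requirement:

# Run the dev server in a detached tmux session on the host
tmux new-session -d -s my-app -c /root/projects/my-app 'npm run dev'

# Attach later
tmux attach -t my-app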

New scaffold-nextjs.sh Script

#!/bin/bash
# scaffold-nextjs.sh - Apply skeleton to Next.js app (host-based)
# Usage: Run from within project directory

set -e

SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
SKELETON_DIR="$SCRIPT_DIR/../scaffolds/nextjs"

if [ ! -f "package.json" ]; then
    echo "Error: Run this script from within a Next.js project directory"
    exit 1
fi

echo "Applying app skeleton..."

# Copy skeleton files
cp -r "$SKELETON_DIR/lib" .
cp -r "$SKELETON_DIR/types" .
cp "$SKELETON_DIR/middleware.ts" .
cp "$SKELETON_DIR/_env.local.example" .env.local.example
cp "$SKELETON_DIR/_gitignore" .gitignore

# Update env template for sbx Supabase
cat > .env.local.example << 'EOF'
# Supabase Configuration
# Development uses supabase-sandbox, production uses supabase

# Development (supabase-sandbox)
NEXT_PUBLIC_SUPABASE_URL=http://10.89.97.221:8000
NEXT_PUBLIC_SUPABASE_ANON_KEY=<get-from-kubectl-secret>

# To get anon key:
# kubectl get secret -n supabase-sandbox supabase-secrets -o jsonpath='{.data.ANON_KEY}' | base64 -d
EOF

echo "Skeleton applied successfully!"
echo ""
echo "Next steps:"
echo "  1. npm install @supabase/supabase-js @supabase/ssr clsx tailwind-merge lucide-react"
echo "  2. cp .env.local.example .env.local"
echo "  3. Edit .env.local with your Supabase credentials"
echo "  4. npm run dev"

Migration Script

scripts/migrate-from-lxc.sh

#!/bin/bash
# migrate-from-lxc.sh - Migrate projects from LXC containers to host
# Usage: ./migrate-from-lxc.sh [project-name]
# Run without args to migrate all projects

set -e

# Project mapping: name -> LXC ID
declare -A PROJECTS=(
    ["money-tracker"]=150
    ["home-portal"]=160
    ["rms"]=170
    ["subtitleai"]=180
    ["trip-planner"]=190
    ["brainlearn"]=182
    ["tcg"]=110
)

HOST_PROJECTS_DIR="/root/projects"
BACKUP_DIR="/root/projects-backup-$(date +%Y%m%d)"

migrate_project() {
    local name=$1
    local lxc_id=$2
    local src="/rpool/data/subvol-${lxc_id}-disk-0/root/${name}"
    local dest="${HOST_PROJECTS_DIR}/${name}"

    echo "Migrating $name (LXC $lxc_id)..."

    # Check source exists
    if [ ! -d "$src" ]; then
        echo "  Warning: Source not found at $src, skipping"
        return 1
    fi

    # Backup existing symlink/directory
    if [ -e "$dest" ]; then
        echo "  Backing up existing $dest"
        mv "$dest" "${BACKUP_DIR}/${name}"
    fi

    # Copy project
    echo "  Copying from $src to $dest"
    cp -a "$src" "$dest"

    # Fix ownership (should be root:root on host)
    chown -R root:root "$dest"

    # Update .env.local if exists
    if [ -f "$dest/.env.local" ]; then
        echo "  Note: Review $dest/.env.local - may need sbx Supabase URL"
    fi

    echo "  Done: $name migrated successfully"
    return 0
}

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Migrate specific project or all
if [ -n "$1" ]; then
    if [ -z "${PROJECTS[$1]}" ]; then
        echo "Unknown project: $1"
        echo "Available: ${!PROJECTS[@]}"
        exit 1
    fi
    migrate_project "$1" "${PROJECTS[$1]}"
else
    echo "Migrating all projects..."
    for name in "${!PROJECTS[@]}"; do
        migrate_project "$name" "${PROJECTS[$name]}" || true
    done
fi

echo ""
echo "Migration complete!"
echo "Backup of previous state: $BACKUP_DIR"
echo ""
echo "Next steps:"
echo "  1. Test each project: cd /root/projects/<name> && npm run dev"
echo "  2. Update .env.local files to use supabase-sbx"
echo "  3. Once verified, stop LXC containers: pct stop <id>"

CLAUDE.md Changes Summary

Sections to Remove

  • LXC container creation workflow
  • pct enter, pct exec examples for dev containers
  • Container filesystem paths (/rpool/data/subvol-*)
  • UID 100000 permission fix instructions
  • Docker-in-LXC Supabase instructions

Sections to Update

  • Project paths: symlinks → actual directories
  • Supabase: local Docker → K8s sbx/prod
  • App creation: 8 stages → 5 steps
  • Dev server: in-container tmux → host tmux

Sections to Add

  • Supabase sbx vs prod distinction
  • Direct host development workflow
  • Simplified scaffold-nextjs.sh usage

Open Questions

  1. Supabase sbx status - Is supabase-sbx or supabase-sandbox the correct namespace? Need to confirm it's running.

  2. LXC template retention - Keep lxc-templates/ for non-dev containers (services, utilities) or archive entirely?

  3. Slash commands - Update /lxc:create-nextjs to new workflow or remove?

  4. Existing tmux sessions - Projects may have tmux sessions in LXCs. Document how to recreate on host?

  5. Git history - Confirm cp -a preserves .git or use rsync with explicit git handling?


Rollback Plan

If issues arise:

  1. LXC containers remain intact until explicit destruction
  2. Backup directory preserves pre-migration state
  3. Symlinks can be recreated if needed
  4. LXC containers can be restarted: pct start <id>
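
A single-project rollback is three commands. A sketch using money-tracker (LXC 150) and the subvolume path from the Before diagram:

# Move the host copy aside, restore the symlink, restart the container
mv /root/projects/money-tracker /root/projects-backup-20251204/
ln -s /rpool/data/subvol-150-disk-0/root/money-tracker /root/projects/money-tracker
pct start 150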


Timeline Estimate

Phase                        Effort      Dependencies
Phase 1: Preparation         30 min      supabase-sbx running
Phase 2: Project Migration   1-2 hours   Phase 1
Phase 3: Documentation       2-3 hours   Phase 2
Phase 4: Validation          1 hour      Phase 3
Phase 5: Cleanup             30 min      Phase 4 validated

Total: 5-7 hours (can be done incrementally)

