# Development Environment Setup
Last Updated: 2025-12-08
Applies To: Host-based development (home-portal, money-tracker, etc.)
This guide explains how to set up your local development environment on the Proxmox host for working on Next.js applications.
## Overview
Development Environment (Proxmox Host):
- Location: /root/projects/<app-name>
- Runtime: Node.js 22 (on host)
- Database: Kubernetes Supabase Sandbox (supabase-sandbox namespace)
- Persistence: tmux sessions
- Access: Direct file editing via Claude Code or SSH
Production Environment (Kubernetes):
- Namespace: App-specific (e.g., money-tracker)
- Database: Kubernetes Supabase Production (supabase namespace)
- Access: LoadBalancer / Ingress
- See Production Deployment Guide
## Prerequisites
- SSH access to the Proxmox host (root@10.89.97.10)
- Node.js 22+ installed on the host (verify: node --version)
- K8s Supabase Sandbox running (verify: kubectl get pods -n supabase-sandbox)
## Host-Based Development Workflow
### 1. Project Navigation
All active projects are located in /root/projects/.
### 2. Environment Configuration
Each project uses a .env.local file to connect to the Supabase Sandbox.
Supabase Environments:
| Environment | Namespace | Kong URL | Use Case |
|-------------|-----------|----------|----------|
| Sandbox | supabase-sandbox | http://10.89.97.221:8000 | Local development |
| Production | supabase | http://10.89.97.214:8000 | Deployed apps |
Quick Environment Switch Script:
```shell
# Switch to Sandbox (recommended for development)
/root/tower-fleet/scripts/switch-supabase-env.sh sandbox /root/projects/<app-name>

# Switch to Production (use with caution)
/root/tower-fleet/scripts/switch-supabase-env.sh prod /root/projects/<app-name>
```
Manual Configuration - All Required Keys:
Apps using Authentik + Supabase need 4 keys that must all come from the same environment:
| Key | Purpose | Required For |
|---|---|---|
| NEXT_PUBLIC_SUPABASE_URL | Kong gateway URL | All Supabase calls |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Public API key | Client-side queries |
| SUPABASE_SERVICE_ROLE_KEY | Admin API key | Server-side queries, bypassing RLS |
| SUPABASE_JWT_SECRET | JWT signing secret | Custom JWT creation (Authentik bridge) |
Get all keys for an environment:
```shell
# Set namespace (supabase-sandbox or supabase)
NS=supabase-sandbox

# Get all keys
echo "ANON_KEY:"
kubectl get secret -n $NS supabase-secrets -o jsonpath='{.data.ANON_KEY}' | base64 -d && echo ""
echo "SERVICE_ROLE_KEY:"
kubectl get secret -n $NS supabase-secrets -o jsonpath='{.data.SERVICE_ROLE_KEY}' | base64 -d && echo ""
echo "JWT_SECRET:"
kubectl get secret -n $NS supabase-secrets -o jsonpath='{.data.JWT_SECRET}' | base64 -d && echo ""
```
Standard .env.local Template (Sandbox):
```
# Supabase Configuration - ALL keys must be from the SAME environment!
NEXT_PUBLIC_SUPABASE_URL=http://10.89.97.221:8000
NEXT_PUBLIC_SUPABASE_ANON_KEY=<from-kubectl-secret>
SUPABASE_SERVICE_ROLE_KEY=<from-kubectl-secret>
SUPABASE_JWT_SECRET=<from-kubectl-secret>
```
Common Mistake: Mixing keys from different environments causes JWSInvalidSignature errors.
The JWT_SECRET must match the secret used to sign ANON_KEY and SERVICE_ROLE_KEY.
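To check whether a key and secret actually match, you can verify the HS256 signature locally with openssl. A minimal sketch (the helper function names are illustrative, not part of any tooling):

```shell
# Verify that a Supabase JWT (e.g. the ANON_KEY) was signed with a given JWT_SECRET.
b64url() { openssl enc -base64 -A | tr '+/' '-_' | tr -d '='; }

verify_jwt() {
  token=$1; secret=$2
  header_payload=${token%.*}   # "header.payload" part of the token
  sig=${token##*.}             # signature segment
  expected=$(printf '%s' "$header_payload" \
    | openssl dgst -sha256 -hmac "$secret" -binary | b64url)
  if [ "$sig" = "$expected" ]; then
    echo "signature OK"
  else
    echo "signature MISMATCH"
  fi
}
```

Running `verify_jwt "$NEXT_PUBLIC_SUPABASE_ANON_KEY" "$SUPABASE_JWT_SECRET"` should print `signature OK` when both values come from the same environment.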
### 3. Running the Dev Server
Use tmux to keep dev servers running persistently.
```shell
# Start a new session
tmux new -s <project-name>

# Navigate and start
cd /root/projects/<project-name>
npm run dev

# Detach from the session: press Ctrl+B, then D
```
Reattach to an existing session:
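```shell
tmux attach -t <project-name>

# List running sessions if you have forgotten the name
tmux ls
```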
### 4. Accessing the App
By default, Next.js runs on port 3000.
- URL: http://10.89.97.10:3000 (Host IP)
- Local: http://localhost:3000
If running multiple apps simultaneously, specify a different port:
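```shell
# Run on port 3001 instead of the default 3000
npm run dev -- -p 3001
```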
## Supabase Development Strategy
We use a shared Supabase instance in Kubernetes for development (supabase-sandbox), but isolate apps using PostgreSQL Schemas.
### Schema Isolation
Each app gets its own schema in the shared database:
- home_portal
- money_tracker
- trip_planner
- rms
### Creating a New App Schema
When starting a new project, you must create its schema in the sandbox:
```shell
# 1. Connect to Sandbox DB
kubectl exec -it -n supabase-sandbox postgres-0 -- psql -U postgres
```

```sql
-- 2. Run these SQL commands inside psql
CREATE SCHEMA my_new_app;
GRANT USAGE ON SCHEMA my_new_app TO postgres, authenticated, service_role, anon;
GRANT ALL ON SCHEMA my_new_app TO postgres;
ALTER DEFAULT PRIVILEGES IN SCHEMA my_new_app GRANT ALL ON TABLES TO postgres, authenticated, service_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA my_new_app GRANT SELECT ON TABLES TO anon;
\q
```
### Exposing Schema via PostgREST
You must tell Supabase to expose the new schema via the API.
- Edit the ConfigMap and update PGRST_DB_SCHEMA: add your schema to the comma-separated list.
- Restart PostgREST so the change takes effect.
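Assuming the sandbox PostgREST ConfigMap and Deployment are both named `supabase-rest` (adjust to your actual manifests), the steps look like:

```shell
# Add your schema to the comma-separated PGRST_DB_SCHEMA list
kubectl edit configmap -n supabase-sandbox supabase-rest

# Restart PostgREST so it reloads the schema list
kubectl rollout restart deployment -n supabase-sandbox supabase-rest
```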
### Database Migrations
NEVER create tables manually. Always use migrations.
- Create Migration: add a new SQL file under supabase/migrations/.
- Edit SQL: the file is located at supabase/migrations/<timestamp>_description.sql.
- Apply to Sandbox.
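A sketch of the flow, assuming migrations are plain SQL files applied with psql (adjust if your projects use the Supabase CLI instead):

```shell
# 1. Create a timestamped migration file
touch "supabase/migrations/$(date +%Y%m%d%H%M%S)_description.sql"

# 2. Edit the SQL, then apply it to the Sandbox database
kubectl exec -i -n supabase-sandbox postgres-0 -- \
  psql -U postgres < supabase/migrations/<timestamp>_description.sql
```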
## Troubleshooting
### Port 3000 already in use
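Find the process holding the port, then stop it:

```shell
# Identify the process listening on port 3000
lsof -i :3000

# Kill it by PID (taken from the lsof output)
kill <PID>
```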
### Supabase Connection Failed
- Verify the supabase-sandbox pods are running.
- Check that .env.local has the correct URL for your environment:
  - Sandbox: http://10.89.97.221:8000
  - Production: http://10.89.97.214:8000
- Verify your IP is allowed (if Network Policies are active).
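The checks above, as commands:

```shell
# Pods should all be Running
kubectl get pods -n supabase-sandbox

# Confirm which URL the app is using
grep NEXT_PUBLIC_SUPABASE_URL /root/projects/<app-name>/.env.local

# Quick reachability check against Kong (any HTTP status code means it is up)
curl -s -o /dev/null -w '%{http_code}\n' http://10.89.97.221:8000
```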
"Relation not found"¶
Ensure your client is querying the correct schema.
In lib/supabase/client.ts:
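A minimal sketch, assuming `money_tracker` is your app's schema and the standard `@supabase/supabase-js` v2 client; the schema named in `db.schema` must also appear in the PGRST_DB_SCHEMA list:

```typescript
import { createClient } from '@supabase/supabase-js'

// Query the app's own schema instead of the default "public"
export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
  { db: { schema: 'money_tracker' } }
)
```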
## Legacy LXC Containers
We previously used LXC containers (IDs 150, 160, 170, 180, 190) for development. These are DEPRECATED and should be kept stopped unless needed for archival reference.
To migrate a legacy project:
1. Copy files from /rpool/data/subvol-XXX... to /root/projects/.
2. Update .env.local to point to K8s Supabase.
3. Fix file permissions (chown -R root:root .).
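As a concrete sketch of those three steps (the container ID, subvolume name, and project name here are illustrative; substitute your own):

```shell
# 1. Copy the project out of the legacy LXC subvolume (illustrative paths)
rsync -a /rpool/data/subvol-160-disk-0/root/money-tracker/ /root/projects/money-tracker/

# 2. Point .env.local at the K8s Supabase Sandbox
/root/tower-fleet/scripts/switch-supabase-env.sh sandbox /root/projects/money-tracker

# 3. Fix file permissions
chown -R root:root /root/projects/money-tracker
```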