# Supabase Consolidation Plan

**Status:** Ready for Execution | **Date:** 2025-01-10 | **Author:** Claude
## Executive Summary
Consolidate the two Supabase instances (`supabase` and `supabase-sandbox`) into one canonical instance. This eliminates confusion, reduces resource usage, and prevents future data divergence.

**Decision:** Keep `supabase` (prod) as the canonical instance; it holds more data and has the larger storage allocation (20Gi vs 10Gi).
## Current State
### supabase (prod) - KEEP
- Kong IP: 10.89.97.214:8000
- Studio IP: 10.89.97.215:3000
- Postgres storage: 20Gi
- Status: Full stack now running (scaled up 2025-01-10)
Schemas:

| Schema | Tables | Notes |
|--------|--------|-------|
| home_portal | 5 | 48 services (more than sandbox) |
| money_tracker | 9 | 819 transactions |
| trip_planner | 7 | 3 trips (more than sandbox) |
| subtitleai | 5 | 22 subtitles |
| palimpsest | 16 | Unique to prod |
| brain_learn | 4 | Unique to prod |
| vault_core | 10 | |
| rms | 9 | |
| authentik | 187 | Template schema |
### supabase-sandbox - DELETE AFTER MIGRATION
- Kong IP: 10.89.97.221:8000
- Studio IP: 10.89.97.222:3000
- Postgres storage: 10Gi
- Status: Full stack running
Schemas:

| Schema | Tables | Notes |
|--------|--------|-------|
| home_portal | 5 | 28 services (fewer than prod) |
| money_tracker | 9 | 819 transactions (same as prod) |
| trip_planner | 7 | 1 trip (fewer than prod) |
| subtitleai | 5 | 22 subtitles (same as prod) |
| notes_app | 1 | Unique to sandbox - MIGRATE |
| replyflow | 6 | Unique to sandbox - MIGRATE |
| rpg | 16 | Unique to sandbox - MIGRATE |
| vault_core | 9 | |
| rms | 9 | |
| authentik | 187 | Template schema |
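For reference, the per-schema table counts above can be re-derived with a catalog query; a sketch, runnable against either instance:

```bash
# Count user tables per schema (information_schema view; excludes system schemas)
kubectl exec -n supabase postgres-0 -- psql -U postgres -At -F' | ' -c \
  "SELECT table_schema, count(*)
     FROM information_schema.tables
    WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
    GROUP BY 1 ORDER BY 1;"
```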
## Migration Steps

### Phase 1: Pre-Migration Verification (15 min)
```bash
# 1. Verify prod Supabase is healthy
kubectl get pods -n supabase

# 2. Verify the gateway is responding (401 = Kong is up and enforcing auth)
curl -s -o /dev/null -w "%{http_code}" http://10.89.97.214:8000/rest/v1/
# Expected: 401

# 3. Backup both databases
mkdir -p /root/backups
kubectl exec -n supabase postgres-0 -- pg_dump -U postgres -Fc postgres > /root/backups/supabase-prod-$(date +%Y%m%d).dump
kubectl exec -n supabase-sandbox postgres-0 -- pg_dump -U postgres -Fc postgres > /root/backups/supabase-sandbox-$(date +%Y%m%d).dump
```
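The custom-format dumps are streamed through `kubectl exec`, so it is worth confirming they are readable before going further. A minimal sketch, assuming `pg_restore` is installed on the host:

```bash
# Reading each dump's table of contents fails fast on a truncated or corrupted stream.
for f in /root/backups/supabase-prod-$(date +%Y%m%d).dump /root/backups/supabase-sandbox-$(date +%Y%m%d).dump; do
  pg_restore --list "$f" > /dev/null && echo "OK: $f" || echo "BAD: $f"
done
```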
### Phase 2: Migrate Sandbox-Only Schemas to Prod (30 min)
Schemas to migrate: `notes_app`, `replyflow`, `rpg`

```bash
# Export each schema from sandbox
kubectl exec -n supabase-sandbox postgres-0 -- pg_dump -U postgres -n notes_app -Fc postgres > /tmp/notes_app.dump
kubectl exec -n supabase-sandbox postgres-0 -- pg_dump -U postgres -n replyflow -Fc postgres > /tmp/replyflow.dump
kubectl exec -n supabase-sandbox postgres-0 -- pg_dump -U postgres -n rpg -Fc postgres > /tmp/rpg.dump

# Copy to the prod pod
kubectl cp /tmp/notes_app.dump supabase/postgres-0:/tmp/notes_app.dump
kubectl cp /tmp/replyflow.dump supabase/postgres-0:/tmp/replyflow.dump
kubectl cp /tmp/rpg.dump supabase/postgres-0:/tmp/rpg.dump

# Import to prod. Each dump already contains only its schema, so no -n filter
# is needed here (pg_restore -n can also skip the CREATE SCHEMA entry itself).
kubectl exec -n supabase postgres-0 -- pg_restore -U postgres -d postgres /tmp/notes_app.dump
kubectl exec -n supabase postgres-0 -- pg_restore -U postgres -d postgres /tmp/replyflow.dump
kubectl exec -n supabase postgres-0 -- pg_restore -U postgres -d postgres /tmp/rpg.dump

# Verify the schemas now exist in prod
kubectl exec -n supabase postgres-0 -- psql -U postgres -c "\dn" | grep -E "notes_app|replyflow|rpg"
```
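Schema existence alone does not prove the data arrived; a per-table row-count comparison between the two instances is a cheap sanity check. A sketch (`n_live_tup` is a statistics estimate, close enough for spotting empty tables):

```bash
# Compare approximate row counts for each migrated schema on both instances.
for schema in notes_app replyflow rpg; do
  for ns in supabase-sandbox supabase; do
    echo "=== $schema in $ns ==="
    kubectl exec -n "$ns" postgres-0 -- psql -U postgres -At -F' | ' -c \
      "SELECT relname, n_live_tup FROM pg_stat_user_tables
        WHERE schemaname = '$schema' ORDER BY relname;"
  done
done
```

Note: if any app reads these schemas through the REST API, they also need to be added to prod's PostgREST schema list (typically the `PGRST_DB_SCHEMAS` setting); restoring the data alone does not expose it.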
### Phase 3: Update Application Configs (20 min)

Apps currently pointing to sandbox (10.89.97.221) must be repointed to prod (10.89.97.214).
Check each app's ConfigMap:
```bash
for ns in home-portal money-tracker trip-planner notes-app tcg palimpsest-api palimpsest-web subtitleai music-control vault-platform; do
  echo "=== $ns ==="
  kubectl get cm -n $ns -o yaml 2>/dev/null | grep -i supabase || echo "No configmap or no supabase ref"
done
```
Update any that point to sandbox:

```bash
# Example for an app pointing to sandbox (<app> is a placeholder)
kubectl patch cm <app>-config -n <app> --type='json' \
  -p='[{"op": "replace", "path": "/data/NEXT_PUBLIC_SUPABASE_URL", "value": "http://10.89.97.214:8000"}]'
kubectl rollout restart deployment/<app> -n <app>
```
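ConfigMaps are not the only place these values live: Supabase URLs and anon/service-role keys are often kept in Secrets, and if the two instances were provisioned with different JWT secrets, the keys must be swapped along with the URL. A sketch for finding Secret references, assuming `jq` is available (the key-name pattern is an assumption):

```bash
# List Secret keys mentioning SUPABASE in each app namespace.
for ns in home-portal money-tracker trip-planner notes-app; do
  echo "=== $ns ==="
  kubectl get secrets -n "$ns" -o json \
    | jq -r '.items[] | .metadata.name as $n | (.data // {}) | keys[]
             | select(test("SUPABASE"; "i")) | "\($n): \(.)"'
done
```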
### Phase 4: Update Manifests in Git (10 min)
Update all references in tower-fleet:
```bash
# Find all sandbox references
grep -r "10.89.97.221" /root/tower-fleet/manifests/
grep -r "10.89.97.222" /root/tower-fleet/manifests/
grep -r "supabase-sandbox" /root/tower-fleet/manifests/

# Update to prod IPs:
#   10.89.97.221 → 10.89.97.214 (Kong)
#   10.89.97.222 → 10.89.97.215 (Studio)
```
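The mechanical replacement can be scripted; since this is a blind substitution, review the diff before committing. A sketch:

```bash
# Swap sandbox IPs for prod IPs in every matching manifest, then inspect the diff.
grep -rl "10.89.97.221" /root/tower-fleet/manifests/ | xargs -r sed -i 's/10\.89\.97\.221/10.89.97.214/g'
grep -rl "10.89.97.222" /root/tower-fleet/manifests/ | xargs -r sed -i 's/10\.89\.97\.222/10.89.97.215/g'
git -C /root/tower-fleet diff --stat
```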
### Phase 5: Update Documentation (10 min)

Files to update:

- `/root/tower-fleet/docs/applications/overview.md` - Supabase URLs
- `/root/tower-fleet/CLAUDE.md` - Quick reference
- Any app-specific docs referencing sandbox
### Phase 6: Verification (15 min)
```bash
# Test each app can reach Supabase
for ns in home-portal money-tracker trip-planner; do
  echo "=== Testing $ns ==="
  kubectl logs -n $ns -l app=$ns --tail=10 | grep -i "error\|supabase" || echo "No errors"
done

# Verify data is accessible:
# open each app in a browser and confirm data loads.
```
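A non-interactive spot check of the prod gateway complements the browser pass. The paths below follow the standard Supabase Kong routes, and `$ANON_KEY` is a placeholder for the prod anon key:

```bash
# 200 (health endpoint) or 401 (auth-gated REST root) means the gateway is routing.
for path in rest/v1/ auth/v1/health; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -H "apikey: $ANON_KEY" "http://10.89.97.214:8000/$path")
  echo "$path -> $code"
done
```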
### Phase 7: Scale Down Sandbox (5 min)
Don't delete yet - keep as backup for 1 week:
```bash
# Scale down all sandbox services
kubectl scale deployment --all -n supabase-sandbox --replicas=0
kubectl scale statefulset --all -n supabase-sandbox --replicas=0
```
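To confirm the scale-down took effect:

```bash
# Expect "No resources found in supabase-sandbox namespace."
kubectl get pods -n supabase-sandbox
```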
### Phase 8: Delete Sandbox (After 1 Week)
```bash
# Postgres was scaled to zero in Phase 7; bring it back for the final dump
kubectl scale statefulset postgres -n supabase-sandbox --replicas=1
kubectl rollout status statefulset/postgres -n supabase-sandbox

# Final backup
kubectl exec -n supabase-sandbox postgres-0 -- pg_dump -U postgres -Fc postgres > /root/backups/supabase-sandbox-final-$(date +%Y%m%d).dump

# Delete namespace
kubectl delete namespace supabase-sandbox

# Remove manifests (run git from inside the repo)
rm -rf /root/tower-fleet/manifests/supabase-sandbox/
git -C /root/tower-fleet add -A
git -C /root/tower-fleet commit -m "chore: remove supabase-sandbox after consolidation"
```
## Post-Migration: Update Recovery Script

Update `/root/tower-fleet/scripts/post-reboot-recovery.sh` to include the prod Supabase services:

```bash
# Add to the SERVICES array:
declare -A SERVICES=(
  ["authentik"]="authentik-server authentik-worker"
  ["home-portal"]="home-portal"
  ["supabase"]="kong rest gotrue storage"  # ADD THIS
  # ... other services
)
```
## Rollback Plan

If issues occur after migration:

1. Restore sandbox: scale the sandbox deployments and statefulsets back up (the reverse of Phase 7).
2. Revert app configs: point `NEXT_PUBLIC_SUPABASE_URL` back to `http://10.89.97.221:8000` and restart the affected deployments.
3. Investigate and fix before retrying.
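A sketch of the first two steps, mirroring the Phase 7 and Phase 3 commands (a single replica per workload is an assumption; restore whatever the manifests specify):

```bash
# 1. Bring the sandbox stack back up (adjust replica counts to match the manifests).
kubectl scale deployment --all -n supabase-sandbox --replicas=1
kubectl scale statefulset --all -n supabase-sandbox --replicas=1

# 2. Point an affected app back at sandbox and restart it.
kubectl patch cm <app>-config -n <app> --type='json' \
  -p='[{"op": "replace", "path": "/data/NEXT_PUBLIC_SUPABASE_URL", "value": "http://10.89.97.221:8000"}]'
kubectl rollout restart deployment/<app> -n <app>
```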
## Timeline

| Phase | Duration | Risk |
|---|---|---|
| 1. Pre-migration verification | 15 min | None |
| 2. Schema migration | 30 min | Low (additive) |
| 3. Update app configs | 20 min | Medium (brief downtime) |
| 4. Update git manifests | 10 min | None |
| 5. Update docs | 10 min | None |
| 6. Verification | 15 min | None |
| 7. Scale down sandbox | 5 min | Low |
| **Total (phases 1-7)** | **~2 hours** | |
| 8. Delete sandbox (after 1 week) | 5 min | Low |
## Success Criteria
- [ ] All apps connect to prod Supabase (10.89.97.214)
- [ ] All sandbox-unique schemas (notes_app, replyflow, rpg) exist in prod
- [ ] No references to sandbox IPs in manifests or configs (see the check after this list)
- [ ] Documentation updated
- [ ] Sandbox namespace deleted (after 1 week)
- [ ] Recovery script updated to include Supabase services
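The reference-scrubbing criterion can be checked mechanically; a sketch:

```bash
# Prints "clean" once no sandbox IPs or names remain anywhere in the repo.
grep -rE "10\.89\.97\.22[12]|supabase-sandbox" /root/tower-fleet/ && echo "references remain" || echo "clean"
```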