Production Deployment Guide¶
Last Updated: 2025-11-10
Applies To: Kubernetes deployments (production environment)
This guide explains how to deploy Next.js applications to Kubernetes for production use.
Overview¶
Production Environment (Kubernetes):
- High availability (pod restarts on failure)
- LoadBalancer for external access
- Environment variables via ConfigMaps
- Persistent storage via Longhorn
- Automatic health checks
Development Environment (Host-based):
- See Development Environment Guide
- All dev work happens in /root/projects/ on Proxmox host
Prerequisites¶
✅ Application tested in dev environment
✅ Docker image built (or using build-on-deploy)
✅ Kubernetes cluster running with:
- Longhorn storage
- MetalLB LoadBalancer
- Supabase deployed
Verify prerequisites:
# Check cluster
kubectl get nodes
# All nodes should be Ready
# Check storage
kubectl get sc
# longhorn should be (default)
# Check LoadBalancer
kubectl get pods -n metallb-system
# controller and speakers Running
# Check Supabase
kubectl get pods -n supabase
# All pods Running
Deployment Options¶
Option A: Direct Kubernetes Manifests (Recommended)¶
Full control over deployment configuration.
Option B: Using Docker Build¶
Build Docker image and deploy from registry.
Option C: GitOps with Flux (Advanced)¶
Automated deployments from git repository.
This guide covers Option A (Direct Manifests).
Phase 1: Prepare Application¶
Step 1.1: Build Production Next.js App¶
On the Proxmox host:
# Navigate to your project
cd /root/projects/home-portal
# Build for production
npm run build
# Test production build locally
npm start
# If it works locally, proceed
Step 1.2: Create .dockerignore File¶
CRITICAL: Before creating the Dockerfile, create a .dockerignore file to prevent development files from being included in production builds.
# /root/projects/home-portal/.dockerignore
# Development environment files
.env.local
.env.*.local
# Node modules (handled by multi-stage build)
node_modules
# Next.js build output
.next
# Git
.git
.gitignore
# IDE
.vscode
.idea
# OS
.DS_Store
Thumbs.db
# Testing
coverage
# Documentation
README.md
*.md
Why this matters: Next.js prioritizes .env.local over .env.production EVEN in production builds. Without .dockerignore, your Docker image will contain dev credentials that override production values.
Step 1.3: Create Dockerfile (if needed)¶
If you don't have a Dockerfile, create one:
# /root/projects/home-portal/Dockerfile
FROM node:20-alpine AS base
# Install dependencies
FROM base AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci
# Build application
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Production image
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
Update next.config.js for standalone:
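The Dockerfile above copies .next/standalone, which only exists when standalone output is enabled. A minimal next.config.js, assuming you have no other custom options:
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Emit a self-contained server bundle to .next/standalone
  output: 'standalone',
};

module.exports = nextConfig;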
Step 1.4: Build and Push Docker Image (Optional)¶
If using a container registry:
# Build image
docker build -t home-portal:latest .
# Tag for registry
docker tag home-portal:latest your-registry.com/home-portal:v1.0.0
# Push to registry
docker push your-registry.com/home-portal:v1.0.0
Or use a local image by building it directly on a k3s node.
Phase 2: Create Kubernetes Resources¶
Step 2.1: Create Namespace¶
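Create a dedicated namespace for the application (this guide uses home-portal throughout):
# Create namespace
kubectl create namespace home-portal
# Verify
kubectl get namespace home-portal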
Step 2.2: Create ConfigMap¶
Store environment variables in a ConfigMap:
# Get Supabase credentials
SUPABASE_ANON_KEY=$(kubectl get secret -n supabase supabase-secrets -o jsonpath='{.data.ANON_KEY}' | base64 -d)
# Create ConfigMap
cat > /tmp/home-portal-config.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: home-portal-config
  namespace: home-portal
data:
  # Supabase Configuration
  NEXT_PUBLIC_SUPABASE_URL: "http://10.89.97.214:8000"
  NEXT_PUBLIC_SUPABASE_ANON_KEY: "$SUPABASE_ANON_KEY"

  # Service URLs (home-portal specific)
  JELLYFIN_URL: "http://10.89.97.97:8096"
  PLEX_URL: "http://10.89.97.120:32400"
  PROXMOX_URL: "https://10.89.97.10:8006"

  # Application Settings
  SERVICE_POLL_INTERVAL: "30"
  NODE_ENV: "production"
EOF
# Apply ConfigMap
kubectl apply -f /tmp/home-portal-config.yaml
# Verify
kubectl get configmap -n home-portal home-portal-config
Step 2.3: Create Deployment¶
Create the deployment manifest:
cat > /tmp/home-portal-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-portal
  namespace: home-portal
  labels:
    app: home-portal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home-portal
  template:
    metadata:
      labels:
        app: home-portal
    spec:
      containers:
      - name: home-portal
        image: your-registry.com/home-portal:v1.0.0  # ← Update this
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          name: http
          protocol: TCP
        env:
        # Pull each value from the ConfigMap
        - name: NEXT_PUBLIC_SUPABASE_URL
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: NEXT_PUBLIC_SUPABASE_URL
        - name: NEXT_PUBLIC_SUPABASE_ANON_KEY
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: NEXT_PUBLIC_SUPABASE_ANON_KEY
        - name: JELLYFIN_URL
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: JELLYFIN_URL
        - name: PLEX_URL
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: PLEX_URL
        - name: PROXMOX_URL
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: PROXMOX_URL
        - name: SERVICE_POLL_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: SERVICE_POLL_INTERVAL
        - name: NODE_ENV
          valueFrom:
            configMapKeyRef:
              name: home-portal-config
              key: NODE_ENV
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /api/health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /api/health
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
EOF
# Apply Deployment
kubectl apply -f /tmp/home-portal-deployment.yaml
# Watch pods start
kubectl get pods -n home-portal -w
# Press Ctrl+C when Running
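If you would rather not list every key by hand, Kubernetes can also import the entire ConfigMap with envFrom; a minimal sketch of the container spec, using the same names as above:
containers:
- name: home-portal
  image: your-registry.com/home-portal:v1.0.0
  envFrom:
  - configMapRef:
      name: home-portal-config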
If you don't have a /api/health endpoint, remove the probes or create one:
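A minimal implementation, assuming the App Router (place it under pages/api/health.js instead if you use the Pages Router):
// app/api/health/route.js
export async function GET() {
  // Basic liveness response; extend with real dependency checks if needed
  return Response.json({ status: 'ok' });
}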
Step 2.4: Create Service (LoadBalancer)¶
Expose your application externally:
cat > /tmp/home-portal-service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: home-portal
  namespace: home-portal
  labels:
    app: home-portal
spec:
  type: LoadBalancer
  loadBalancerIP: 10.89.97.213  # ← Choose available IP from MetalLB pool
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
  selector:
    app: home-portal
EOF
# Apply Service
kubectl apply -f /tmp/home-portal-service.yaml
# Check LoadBalancer IP assigned
kubectl get svc -n home-portal home-portal
# EXTERNAL-IP should show 10.89.97.213
Phase 3: Verification¶
Step 3.1: Check Pod Status¶
kubectl get pods -n home-portal
# Should show:
# NAME READY STATUS RESTARTS AGE
# home-portal-xxxxxxxxx-xxxxx 1/1 Running 0 2m
Step 3.2: Check Logs¶
kubectl logs -n home-portal -l app=home-portal --tail=50
# Should see Next.js startup logs
# Look for: "Ready on http://0.0.0.0:3000"
Step 3.3: Test Access¶
Open http://10.89.97.213 (the LoadBalancer IP assigned in Step 2.4) in your browser.
You should see your application!
Phase 4: Configuration Management¶
Updating Environment Variables¶
Step 1: Edit ConfigMap
# Edit ConfigMap
kubectl edit configmap -n home-portal home-portal-config
# Or update from file
kubectl apply -f /tmp/home-portal-config.yaml
Step 2: Restart Deployment
# Restart pods to pick up new config
kubectl rollout restart deployment -n home-portal home-portal
# Wait for rollout
kubectl rollout status deployment -n home-portal home-portal
Updating Application Code¶
Option A: Push new Docker image
# Build new version
docker build -t your-registry.com/home-portal:v1.0.1 .
docker push your-registry.com/home-portal:v1.0.1
# Update deployment
kubectl set image deployment/home-portal \
home-portal=your-registry.com/home-portal:v1.0.1 \
-n home-portal
# Or edit deployment
kubectl edit deployment -n home-portal home-portal
# Change image tag to v1.0.1
Option B: Rebuild and redeploy
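A sketch of the local-image workflow, assuming you build on a k3s node and bump the tag each time (the imagePullPolicy: IfNotPresent set earlier will then use the local image):
# Rebuild with a new tag on the k3s node
docker build -t home-portal:v1.0.1 .
# Export the image and import it into k3s' containerd image store
docker save -o /tmp/home-portal.tar home-portal:v1.0.1
k3s ctr images import /tmp/home-portal.tar
# Point the deployment at the new tag
kubectl set image deployment/home-portal home-portal=home-portal:v1.0.1 -n home-portal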
Scaling¶
Increase Replicas¶
# Scale to 3 replicas for high availability
kubectl scale deployment home-portal --replicas=3 -n home-portal
# Verify
kubectl get pods -n home-portal
# Should show 3 pods
# LoadBalancer automatically load-balances across all pods
Rollback¶
If a deployment goes wrong:
# View deployment history
kubectl rollout history deployment -n home-portal home-portal
# Rollback to previous version
kubectl rollout undo deployment -n home-portal home-portal
# Rollback to specific revision
kubectl rollout undo deployment -n home-portal home-portal --to-revision=2
Monitoring¶
Check Resource Usage¶
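Assuming metrics-server is available (k3s bundles it by default):
# Per-pod and per-node usage
kubectl top pods -n home-portal
kubectl top nodes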
View Events¶
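For example, sorted by most recent:
kubectl get events -n home-portal --sort-by='.lastTimestamp'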
Stream Logs¶
# Follow logs in real-time
kubectl logs -n home-portal -l app=home-portal -f
# Logs from specific pod
kubectl logs -n home-portal home-portal-xxxxxxxxx-xxxxx -f
Troubleshooting¶
Issue: Pod stuck in Pending¶
Check:
kubectl describe pod -n home-portal <pod-name>
# Look for:
# - Insufficient resources
# - No nodes available
# - PVC binding issues
Solution:
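Possible fixes, depending on what describe reports:
# Check node capacity and current allocations
kubectl describe nodes
# Check PVC status if the pod mounts storage
kubectl get pvc -n home-portal
# Lower resource requests if no node can fit them
kubectl edit deployment -n home-portal home-portal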
Issue: Pod CrashLoopBackOff¶
Check logs:
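kubectl logs -n home-portal -l app=home-portal --tail=100
# Include output from the previous (crashed) container
kubectl logs -n home-portal <pod-name> --previous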
Common causes:
- Missing environment variables
- Application startup errors
- Health probe failures
Issue: Can't access via LoadBalancer IP¶
Check Service:
kubectl get svc -n home-portal home-portal
# Verify EXTERNAL-IP is assigned
# Check MetalLB
kubectl get pods -n metallb-system
Test from inside cluster:
# Create test pod
kubectl run -it --rm debug --image=curlimages/curl -- sh
# Test internal service
curl http://home-portal.home-portal.svc.cluster.local
Issue: Environment variables not updating¶
Cause: Pods need restart to pick up new ConfigMap values
Solution:
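# Restart pods so they re-read the ConfigMap
kubectl rollout restart deployment -n home-portal home-portal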
Best Practices¶
Resource Management¶
resources:
  requests:        # Guaranteed resources
    cpu: 100m
    memory: 256Mi
  limits:          # Maximum resources
    cpu: 500m
    memory: 512Mi
Guidelines:
- Set requests lower than actual usage (for scheduling)
- Set limits to prevent runaway processes
- Monitor actual usage with kubectl top pods
Health Checks¶
Always configure health probes:
livenessProbe:     # Restart pod if fails
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:    # Remove from load balancer if fails
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 5
Image Management¶
DO:
✅ Use specific image tags (:v1.0.0, not :latest)
✅ Keep images in a registry (Docker Hub, GitHub Container Registry)
✅ Tag images with git commit SHA for traceability
DON'T:
❌ Use the :latest tag (you can't reliably roll back to a known image)
❌ Store images only locally (node failures = lost images)
Configuration¶
DO:
✅ Use ConfigMaps for non-sensitive config
✅ Use Secrets for passwords, API keys
✅ Use separate ConfigMaps per environment (dev, staging, prod)
DON'T:
❌ Hardcode values in deployment manifests
❌ Commit secrets to git
Deployment Checklist¶
Before deploying to production:
- [ ] Tested in dev environment
- [ ] Built production Docker image
- [ ] Created namespace
- [ ] Created ConfigMap with all env vars
- [ ] Created Secrets (if needed)
- [ ] Configured health probes
- [ ] Set resource requests/limits
- [ ] Chose LoadBalancer IP
- [ ] Applied deployment
- [ ] Verified pods running
- [ ] Tested external access
- [ ] Checked logs for errors
- [ ] Updated documentation
Example: Complete Deployment¶
Full example for deploying money-tracker:
# 1. Create namespace
kubectl create namespace money-tracker
# 2. Get Supabase credentials
ANON_KEY=$(kubectl get secret -n supabase supabase-secrets -o jsonpath='{.data.ANON_KEY}' | base64 -d)
# 3. Create ConfigMap
kubectl create configmap money-tracker-config \
--namespace=money-tracker \
--from-literal=NEXT_PUBLIC_SUPABASE_URL="http://10.89.97.214:8000" \
--from-literal=NEXT_PUBLIC_SUPABASE_ANON_KEY="$ANON_KEY" \
--from-literal=NODE_ENV="production"
# 4. Create deployment (using manifest from Step 2.3, adapted)
kubectl apply -f /tmp/money-tracker-deployment.yaml
# 5. Create LoadBalancer service
kubectl apply -f /tmp/money-tracker-service.yaml
# 6. Verify
kubectl get all -n money-tracker
# 7. Access
curl http://10.89.97.239 # Or chosen LoadBalancer IP
CI/CD Integration (Future)¶
For automated deployments:
- Git Push → Triggers CI pipeline
- Build Docker image
- Tag image with git SHA
- Push to registry
- Update k8s deployment
- Verify health checks pass
Tools:
- GitHub Actions
- GitLab CI
- FluxCD (GitOps)
- ArgoCD (GitOps)
Saving Manifests¶
Store your manifests for version control:
# Create manifests directory
mkdir -p /root/k8s/manifests/home-portal
# Save manifests
cp /tmp/home-portal-config.yaml /root/k8s/manifests/home-portal/
cp /tmp/home-portal-deployment.yaml /root/k8s/manifests/home-portal/
cp /tmp/home-portal-service.yaml /root/k8s/manifests/home-portal/
# Optional: Create namespace manifest
cat > /root/k8s/manifests/home-portal/namespace.yaml << 'EOF'
apiVersion: v1
kind: Namespace
metadata:
name: home-portal
EOF
Redeploy anytime:
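# Apply the whole directory of saved manifests
kubectl apply -f /root/k8s/manifests/home-portal/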
Multi-Environment Strategy¶
Development: Host-based (/root/projects/) with K8s Supabase Sandbox
Staging: K8s namespace app-staging with shared K8s Supabase
Production: K8s namespace app-prod with production Supabase
Example namespaces:
- home-portal-dev (testing new features)
- home-portal-staging (pre-production)
- home-portal (production)
Each with separate ConfigMaps pointing to different Supabase databases or schemas.
Multi-Service Applications¶
Some applications require multiple services beyond just the web app (e.g., background workers, pollers, schedulers).
Example: SubtitleAI Worker Architecture¶
Components:
1. Next.js Web App - UI for uploading videos and viewing jobs
2. Celery Worker - Background task processor for subtitle generation
3. Database Poller - Finds pending jobs and enqueues them to Celery
4. Redis - Message broker for Celery tasks
Deployment Strategy¶
Development:
- Web app: /root/projects/subtitleai on host with K8s Supabase Sandbox
- Worker services: Docker Compose (redis, worker, poller) or K8s
- Connected via K8s Supabase database
Production (K8s):
Option A: Separate Deployments (Recommended)
# subtitleai-web deployment (Next.js)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: subtitleai-web
  namespace: subtitleai
spec:
  replicas: 2
  selector:
    matchLabels:
      app: subtitleai-web
  template:
    metadata:
      labels:
        app: subtitleai-web
    spec:
      containers:
      - name: web
        image: subtitleai-web:v1.0.0
        ports:
        - containerPort: 3000
---
# subtitleai-worker deployment (Celery)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: subtitleai-worker
  namespace: subtitleai
spec:
  replicas: 1  # Only 1 replica to avoid concurrent processing
  selector:
    matchLabels:
      app: subtitleai-worker
  template:
    metadata:
      labels:
        app: subtitleai-worker
    spec:
      containers:
      - name: worker
        image: subtitleai-worker:v1.0.0
        env:
        - name: REDIS_URL
          value: "redis://redis-service:6379/0"
        - name: SUPABASE_URL
          valueFrom:
            configMapKeyRef:
              name: subtitleai-config
              key: SUPABASE_URL
        - name: SUPABASE_SERVICE_KEY
          valueFrom:
            secretKeyRef:
              name: subtitleai-secrets
              key: SUPABASE_SERVICE_KEY
---
# subtitleai-poller deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: subtitleai-poller
  namespace: subtitleai
spec:
  replicas: 1  # Only 1 replica needed
  selector:
    matchLabels:
      app: subtitleai-poller
  template:
    metadata:
      labels:
        app: subtitleai-poller
    spec:
      containers:
      - name: poller
        image: subtitleai-worker:v1.0.0
        command: ["python3", "poller.py"]
        env:
        - name: REDIS_URL
          value: "redis://redis-service:6379/0"
        - name: SUPABASE_URL
          valueFrom:
            configMapKeyRef:
              name: subtitleai-config
              key: SUPABASE_URL
        - name: SUPABASE_SERVICE_KEY
          valueFrom:
            secretKeyRef:
              name: subtitleai-secrets
              key: SUPABASE_SERVICE_KEY
---
# Redis deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: subtitleai
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
---
# Redis service
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: subtitleai
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
Option B: Single Pod with Multiple Containers (Not Recommended)
Uses sidecar pattern but tightly couples services.
Key Considerations¶
Worker Scaling:
- DON'T scale worker replicas > 1 if processing must be sequential
- Use worker_prefetch_multiplier=1 in Celery to process one job at a time
- Scale by increasing worker CPU/memory resources, not replicas
Poller Pattern:
- Database polling avoids complex message queue setup
- Simple and reliable for low-traffic applications
- Poll interval: 5-10 seconds is sufficient
- Only 1 poller replica needed (multiple pollers may create duplicate tasks)
Redis:
- Use a persistent volume if task history is important
- For stateless tasks, ephemeral Redis is fine (tasks lost on restart)
- Consider Redis Sentinel for HA (overkill for homelab)
Secrets Management:
- Worker needs SUPABASE_SERVICE_KEY to bypass RLS
- Web app uses SUPABASE_ANON_KEY for user-scoped access
- Store service key in Kubernetes Secret, not ConfigMap
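A sketch of creating that Secret, assuming the service role key is already exported in a shell variable named SERVICE_ROLE_KEY (hypothetical name):
# Create the Secret holding the service role key
kubectl create secret generic subtitleai-secrets \
  --namespace=subtitleai \
  --from-literal=SUPABASE_SERVICE_KEY="$SERVICE_ROLE_KEY"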
Health Checks¶
Web App:
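Assuming the web app exposes the same /api/health endpoint used earlier in this guide:
livenessProbe:
  httpGet:
    path: /api/health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10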
Worker:
livenessProbe:
  exec:
    command:
    - celery
    - -A
    - worker
    - inspect
    - ping
  initialDelaySeconds: 30
  periodSeconds: 30
Poller:
livenessProbe:
  exec:
    command:
    - pgrep
    - -f
    - "python3 poller.py"
  initialDelaySeconds: 10
  periodSeconds: 10
Migration to K8s¶
Steps:
1. Build Docker images for web and worker
2. Create namespace and ConfigMap
3. Create Secret with service role key
4. Deploy Redis first
5. Deploy worker (depends on Redis)
6. Deploy poller (depends on Redis and database)
7. Deploy web app last
8. Update dev server to point to K8s services
Testing:
# Check all services running
kubectl get pods -n subtitleai
# Test job creation (web → database)
curl -X POST http://subtitleai-web-service/api/jobs
# Check worker picked up job (database → poller → worker)
kubectl logs -n subtitleai -l app=subtitleai-poller
kubectl logs -n subtitleai -l app=subtitleai-worker
Related Documentation¶
- Development Environment - Local dev setup
- Supabase Architecture - How Supabase works
- Supabase Deployment - Deploy Supabase
Next Steps¶
- Set up monitoring (Grafana dashboards)
- Configure backups (Longhorn snapshots)
- Implement CI/CD pipeline
- Add SSL/TLS certificates (cert-manager)
- Set up logging aggregation
Questions?
Check the Troubleshooting Guide or review project-specific READMEs.