# PVC Mount Verification Guide
When deploying containerized applications to Kubernetes, incorrect PVC mount paths are a common source of data loss. This guide provides a checklist for verifying that persistent storage is correctly configured.
## The Problem
Container images often use non-standard paths for persistent data. If your PVC mount path doesn't match where the application actually writes data:

- Data appears to persist (the PVC is bound, no errors)
- But on pod restart, all "persistent" data is lost
- The application resets to a fresh-install state
## Pre-Deployment Checklist
Before deploying a new app, verify:
### 1. Find the Container's Data Path
```shell
# Option A: Check official documentation / Dockerfile
# Look for VOLUME declarations or documented data paths

# Option B: Run the container temporarily and inspect it
docker run --rm -it <image>:<tag> sh -c "ls -la / && ls -la /var && ls -la /app 2>/dev/null"

# Option C: After deployment, exec into the container
kubectl exec deployment/<app> -n <namespace> -- ls -la /
kubectl exec deployment/<app> -n <namespace> -- find / -name "*.db" -o -name "*.sqlite" -o -name ".env" 2>/dev/null
```
### 2. Check Container User
```shell
# Find what user the container runs as
kubectl exec deployment/<app> -n <namespace> -- id
# Example output: uid=82(www-data) gid=82(www-data)

# If non-root, you'll need fsGroup or an initContainer for permissions
```
### 3. Verify Mount Path in Deployment
```shell
# Check current volume mounts
kubectl get deployment <app> -n <namespace> -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}' | jq .
```
### 4. Test Write Permissions
```shell
# Try to write to the mounted path
kubectl exec deployment/<app> -n <namespace> -- touch /path/to/mount/test
kubectl exec deployment/<app> -n <namespace> -- rm /path/to/mount/test

# If "Permission denied", add fsGroup or an initContainer
```
### 5. Verify Data is on PVC (not ephemeral)
```shell
# Check the mounted filesystem
kubectl exec deployment/<app> -n <namespace> -- df -h /path/to/mount
# Should show a Longhorn/NFS volume, not overlayfs

# Check mount points
kubectl exec deployment/<app> -n <namespace> -- mount | grep <path>
```
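The overlayfs check can be wrapped in a small helper if you run it often. A sketch, assuming a GNU/busybox `stat` inside the container; the `is_persistent` function is hypothetical, not part of any tool:

```shell
# Hypothetical helper: succeeds when the filesystem type indicates a real
# volume rather than the container's ephemeral writable layer.
is_persistent() {
  case "$1" in
    overlay|overlayfs|tmpfs) return 1 ;;  # ephemeral: data dies with the pod
    *) return 0 ;;                        # e.g. ext4 (Longhorn block device), nfs
  esac
}

# In-cluster usage (placeholder names):
# fstype=$(kubectl exec deployment/<app> -n <namespace> -- stat -f -c %T /path/to/mount)
# is_persistent "$fstype" || echo "WARNING: /path/to/mount is ephemeral"
```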
## Common Patterns by App Type
| App Type | Common Data Paths | Notes |
|---|---|---|
| Laravel/PHP | /var/www/html/storage, /app/storage | Often symlinked |
| Node.js | /app/data, /data | Check package.json |
| Go apps | /data, /app/data | Often configurable via env |
| Python/Django | /app/data, /var/lib/<app> | Check settings.py |
| Databases | /var/lib/mysql, /var/lib/postgresql | Use StatefulSets |
## Fixing Permission Issues
### Option 1: fsGroup (Recommended)
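A minimal sketch of the fsGroup approach; the gid 82 comes from the `www-data` example above, and the container name is a placeholder:

```yaml
# Deployment excerpt (sketch): fsGroup makes Kubernetes set group
# ownership on the mounted volume so a non-root container can write.
spec:
  template:
    spec:
      securityContext:
        fsGroup: 82            # gid from `kubectl exec ... -- id`
      containers:
        - name: app            # placeholder
          volumeMounts:
            - name: data
              mountPath: /path/to/mount
```

Note that fsGroup only takes effect on volume types that support ownership management (block-backed volumes like Longhorn do; some NFS setups don't).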
### Option 2: Init Container
```yaml
spec:
  initContainers:
    - name: fix-permissions
      image: busybox:latest
      command: ["sh", "-c", "chown -R <uid>:<gid> /data"]
      volumeMounts:
        - name: data
          mountPath: /data
```
### Option 3: SecurityContext runAsUser
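A minimal sketch of the runAsUser approach; the 82 values are placeholders matching the earlier example:

```yaml
# Deployment excerpt (sketch): force the uid/gid the process runs as.
# Only works if the image tolerates running as an arbitrary user.
spec:
  template:
    spec:
      securityContext:
        runAsUser: 82
        runAsGroup: 82
```

This changes who the process runs as, but it does not change ownership of existing files on the PVC, so it is usually combined with fsGroup or an init container.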
## Post-Deployment Verification
After deploying, always verify persistence survives restart:
```shell
# 1. Make a change in the app (create a file, change a setting)

# 2. Force a pod restart
kubectl rollout restart deployment/<app> -n <namespace>
kubectl rollout status deployment/<app> -n <namespace>

# 3. Verify the change persisted
kubectl exec deployment/<app> -n <namespace> -- cat /path/to/data/file
```
## Real-World Example: Pelican Panel
**Problem:** OAuth settings were lost after every pod restart.

**Diagnosis:**
```shell
# Container data path: /pelican-data/
# PVC mounted to:      /app/var (WRONG!)
# Container user:      www-data (uid 82)
# PVC owner:           root:root (PERMISSION DENIED)
```
**Fix:**

1. Changed the mount path to `/pelican-data`
2. Added `fsGroup: 82` and an init container to fix permissions