# NFS Storage Pattern for Kubernetes Apps
Guide for mounting ZFS/NAS storage into Kubernetes pods via NFS instead of using Longhorn PVCs.
## When to Use NFS vs Longhorn

Use NFS for:

- Large media libraries (ROMs, photos, videos, music)
- Read-heavy workloads
- Data that needs to be shared across multiple apps
- Capacity beyond what the Longhorn cluster can provide
- Static content that doesn't change frequently

Use Longhorn for:

- Databases (PostgreSQL, MariaDB, Redis)
- Application config and state
- Write-heavy workloads
- Small to medium volumes (<50Gi)
- Data that needs high availability (3x replication)
## Pattern Overview

1. Create an NFS export on the NAS or Proxmox host
2. Create a PersistentVolume pointing to the NFS share
3. Create a PersistentVolumeClaim referencing the PV
4. Mount the PVC in the pod via Helm chart configuration
## Example: RomM ROM Library

### Step 1: Create NFS Export

On the NAS (LXC 101) or the Proxmox host:
```bash
# Option A: Export from NAS
pct exec 101 -- mkdir -p /mnt/vault/media/roms
pct exec 101 -- exportfs -o rw,sync,no_subtree_check,no_root_squash 10.89.97.0/24:/mnt/vault/media/roms

# Option B: Export from Proxmox ZFS
mkdir -p /zpool/media/roms
echo "/zpool/media/roms 10.89.97.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
exportfs -ra
```
Verify export:
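```bash
# Use 10.89.97.89 for the NAS export or 10.89.97.10 for the Proxmox host export
showmount -e 10.89.97.89
```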
### Step 2: Create Kubernetes NFS Resources

File: `manifests/apps/romm/nfs-library.yaml`
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: romm-library-nfs
spec:
  capacity:
    storage: 500Gi          # Logical size, doesn't enforce quota
  accessModes:
    - ReadWriteMany         # Multiple pods can mount (if needed)
  nfs:
    server: 10.89.97.89     # NAS IP
    path: /mnt/vault/media/roms
  mountOptions:
    - nfsvers=4
    - rw
  persistentVolumeReclaimPolicy: Retain  # Keep data if PVC deleted
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: romm-library-nfs
  namespace: romm
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""      # Empty = use specific PV by name
  volumeName: romm-library-nfs
  resources:
    requests:
      storage: 500Gi
```
Apply:

```bash
kubectl apply -f manifests/apps/romm/nfs-library.yaml
kubectl get pv romm-library-nfs
kubectl get pvc -n romm romm-library-nfs
```
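Both the PV and the PVC should report `Bound` once the claim binds. A quick scripted check, using the names from the manifest above:

```bash
# Each command should print "Bound"
kubectl get pv romm-library-nfs -o jsonpath='{.status.phase}{"\n"}'
kubectl get pvc -n romm romm-library-nfs -o jsonpath='{.status.phase}{"\n"}'
```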
### Step 3: Update Helm Values

Remove the Longhorn library volume:

```yaml
# values.yaml - OLD (Longhorn)
persistence:
  library:
    enabled: true
    mountPath: /romm/library
    type: pvc
    storageClass: longhorn
    size: 20Gi
```
Add the NFS library volume:

```yaml
# values.yaml - NEW (NFS)
persistence:
  library:
    enabled: true
    mountPath: /romm/library
    type: pvc
    existingClaim: romm-library-nfs  # Reference external PVC
```
### Step 4: Redeploy Application

```bash
# Backup existing library data if needed, then copy it off the pod
# (the pod's /tmp is destroyed when the release is uninstalled)
kubectl exec -n romm deployment/romm -- tar czf /tmp/library-backup.tar.gz /romm/library
kubectl cp romm/<romm-pod>:/tmp/library-backup.tar.gz /tmp/library-backup.tar.gz

# Uninstall to remove the old PVC
helm uninstall romm -n romm
kubectl delete pvc romm-library -n romm

# Deploy with the new NFS mount
./manifests/apps/romm/deploy.sh

# Restore data if needed
kubectl cp /tmp/library-backup.tar.gz romm/<romm-pod>:/tmp/
kubectl exec -n romm deployment/romm -- tar xzf /tmp/library-backup.tar.gz -C /
```
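To confirm the new mount is live, list the library inside the pod and check for an NFS entry in the mount table (assuming the image ships `ls` and `grep`):

```bash
kubectl exec -n romm deployment/romm -- ls /romm/library
# Should show an nfs4 entry for the library path
kubectl exec -n romm deployment/romm -- grep /romm/library /proc/mounts
```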
## Real Example: Immich External Photo Library

File: `manifests/apps/immich/nfs-external-library.yaml`
```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: immich-external-library
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany          # Read-only for existing photo library
  nfs:
    server: 10.89.97.89
    path: /mnt/vault/media/pictures
  mountOptions:
    - nfsvers=4
    - ro                    # Read-only mount
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-external-library
  namespace: immich
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""
  volumeName: immich-external-library
  resources:
    requests:
      storage: 500Gi
```
Mount in Immich:

```yaml
# In Immich Helm values or deployment
volumes:
  - name: external-library
    persistentVolumeClaim:
      claimName: immich-external-library
volumeMounts:
  - name: external-library
    mountPath: /external-library
    readOnly: true
```
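A quick check that the library is visible and actually read-only from inside Immich; `immich-server` is an assumed deployment name, adjust to your release:

```bash
kubectl exec -n immich deployment/immich-server -- ls /external-library
# This should fail with "Read-only file system"
kubectl exec -n immich deployment/immich-server -- touch /external-library/.rw-test
```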
## Storage Comparison
| Feature | Longhorn | NFS |
|---|---|---|
| Replication | 3x copies across nodes | Single source |
| Performance | Good for random I/O | Good for sequential reads |
| Capacity | Limited by cluster storage | Limited by NAS/ZFS pool |
| High Availability | Yes (survives node failure) | No (NAS is single point) |
| Best For | Databases, app state | Media libraries, archives |
| Access Mode | ReadWriteOnce | ReadWriteMany |
## NFS Server Options

### Option 1: NAS Container (LXC 101)

Pros:

- Dedicated storage appliance
- Already configured for media
- Existing backup strategy

Cons:

- Single point of failure
- Limited by container resources

IP: 10.89.97.89

Exports: Check with `showmount -e 10.89.97.89`

Status: The NFS server has dependency issues in the LXC container - use the Proxmox host instead
### Option 2: Proxmox Host ZFS (Recommended)

Pros:

- Direct access to the ZFS pool (vault = 20TB)
- No container overhead
- Can leverage ZFS features (snapshots, compression)
- NFS server already running and configured

Cons:

- Ties apps to the Proxmox host

IP: 10.89.97.10

Exports: Check with `showmount -e 10.89.97.10`

Current exports:

```
/vault/subvol-101-disk-0/media       10.89.97.50   (arr-stack VM)
/vault/subvol-101-disk-0/media/roms  10.89.97.0/24 (k8s cluster)
```
Adding new exports:

```bash
# Create directory
mkdir -p /vault/subvol-101-disk-0/media/<new-path>
chown 1000:1000 /vault/subvol-101-disk-0/media/<new-path>

# Add to /etc/exports
echo "/vault/subvol-101-disk-0/media/<new-path> 10.89.97.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
exportfs -ra
```
### Option 3: Dedicated NFS Provisioner in K8s

Use the NFS Subdir External Provisioner for dynamic provisioning:

```bash
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=10.89.97.89 \
  --set nfs.path=/mnt/vault/k8s-volumes
```

This creates a new StorageClass that provisions NFS-backed PVCs automatically, so no hand-written PV is needed.
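A minimal sketch of a claim against the provisioner; `nfs-client` is the chart's default StorageClass name (override via `storageClass.name`), and the claim name here is hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-nfs-example     # hypothetical name for illustration
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client  # chart's default StorageClass
  resources:
    requests:
      storage: 10Gi             # provisioned as a subdirectory under nfs.path
```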
## Troubleshooting

### PVC Stuck in Pending

```bash
# Check PVC events
kubectl describe pvc romm-library-nfs -n romm

# Common issues:
# - PV name doesn't match volumeName in PVC
# - Namespace mismatch
# - accessModes mismatch between PV and PVC
```
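To eyeball the fields that must agree for binding, using the RomM names as the example:

```bash
# accessModes must match, and the PVC's volumeName must equal the PV name
kubectl get pv romm-library-nfs -o jsonpath='{.spec.accessModes} {.spec.capacity.storage}{"\n"}'
kubectl get pvc -n romm romm-library-nfs -o jsonpath='{.spec.accessModes} {.spec.volumeName}{"\n"}'
```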
### Mount Errors in Pod

```bash
# Check pod events
kubectl describe pod -n romm -l app.kubernetes.io/name=romm

# Common errors:
# - "mount.nfs: access denied" → Check NFS export permissions
# - "mount.nfs: No route to host" → Check NFS server IP and firewall
# - "mount.nfs: Connection timed out" → Check NFS server is running
```
### Test NFS Mount from Node

```bash
# SSH to a k3s node
ssh root@10.89.97.201

# Test the mount manually
mkdir -p /tmp/nfs-test
mount -t nfs -o nfsvers=4 10.89.97.89:/mnt/vault/media/roms /tmp/nfs-test
ls /tmp/nfs-test
umount /tmp/nfs-test
```
### Permission Issues

```bash
# Check ownership on the NFS export
pct exec 101 -- ls -la /mnt/vault/media/roms

# RomM runs as root (UID 0) by default, so it needs matching permissions
# Fix ownership if needed:
pct exec 101 -- chown -R 0:0 /mnt/vault/media/roms
```
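If you're unsure what UID the container actually runs as, check from inside the pod and compare against the export's ownership:

```bash
# Prints the UID/GID of the app process (0/0 for RomM by default)
kubectl exec -n romm deployment/romm -- id
```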
## Migration Steps (Longhorn → NFS)

A condensed checklist; the commands for each step are covered in the RomM walkthrough above, and a combined sketch follows this list.

1. Backup existing data (tar it inside the pod, then `kubectl cp` it off the pod)
2. Scale down the app
3. Create the NFS export and K8s resources (Steps 1-2)
4. Update the Helm values (Step 3)
5. Redeploy (Step 4)
6. Restore data if needed
7. Verify the mount and clean up the old Longhorn PVC
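A minimal end-to-end sketch for the RomM migration, assuming the deployment, namespace, and PVC names used earlier in this guide (`<romm-pod>` is a placeholder for the actual pod name):

```bash
# 1. Backup off the pod (the pod's /tmp disappears with the release)
kubectl exec -n romm deployment/romm -- tar czf /tmp/library-backup.tar.gz /romm/library
kubectl cp romm/<romm-pod>:/tmp/library-backup.tar.gz /tmp/library-backup.tar.gz

# 2. Scale down the app
kubectl scale deployment/romm -n romm --replicas=0

# 3. NFS export (Step 1), then PV and PVC (Step 2)
kubectl apply -f manifests/apps/romm/nfs-library.yaml

# 4-5. Update values.yaml to use existingClaim (Step 3), then redeploy
./manifests/apps/romm/deploy.sh

# 6. Restore if needed
kubectl cp /tmp/library-backup.tar.gz romm/<romm-pod>:/tmp/
kubectl exec -n romm deployment/romm -- tar xzf /tmp/library-backup.tar.gz -C /

# 7. Verify, then remove the old Longhorn PVC
kubectl exec -n romm deployment/romm -- ls /romm/library
kubectl delete pvc romm-library -n romm
```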
## Documentation Status

Apps using NFS:

- RomM: ROM library mounted from /vault/subvol-101-disk-0/media/roms (20TB available)
- Immich: Photo library mounted from /vault/subvol-101-disk-0/media/immich (20TB available)

Apps that could benefit:

- Plex/Jellyfin: Media libraries (if deployed)

Documented:

- /root/tower-fleet/docs/infrastructure/storage.md - General storage overview
- /root/tower-fleet/docs/reference/nfs-storage-pattern.md - This document (NFS-specific pattern)

Related Documentation:

- Storage Infrastructure
- App Conventions
- Creating New Apps