Proxmox & LXC Operations Guide¶
This guide covers infrastructure-level operations for the Proxmox host, LXC containers, VMs, and storage management.
Table of Contents¶
- LXC Container Management
- Storage & File Share Management
- Network Discovery (Avahi/mDNS)
- UID/GID Mapping & Permissions
- VM Management
- Network Operations
- Backup & Recovery
LXC Container Management¶
Basic Container Operations¶
List all containers:
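pct list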
Start/stop/restart containers:
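# Replace 101 with the target CT ID
pct start 101
pct stop 101
pct reboot 101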
Enter container interactively:
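pct enter 101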
Execute command in container:
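pct exec 101 -- <command>
# Example:
pct exec 101 -- hostname -I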
View container configuration:
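pct config 101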
Edit container configuration:
# Configuration file location
nano /etc/pve/lxc/101.conf
# Or use pct set commands
pct set 101 --memory 2048
pct set 101 --cores 4
Access container filesystem from host:
# Read-only access from Proxmox host
ls -la /rpool/data/subvol-101-disk-0/
cat /rpool/data/subvol-101-disk-0/etc/samba/smb.conf
Process Management in Containers¶
Note: Development LXC containers (150, 160, 170, 190) are deprecated. Development now runs directly on the Proxmox host in
/root/projects/. These commands are useful for other LXC services.
Find processes by port:
# Using ss (preferred - always available)
pct exec <CT_ID> -- ss -tlnp | grep 3000
# Output: LISTEN 0 511 *:3000 *:* users:(("next-server",pid=2572,fd=19))
# Using lsof (if installed)
pct exec <CT_ID> -- lsof -i :3000
Find processes by name:
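# Using pgrep (part of procps; usually preinstalled)
pct exec <CT_ID> -- pgrep -af node
# Using ps (the grep runs on the host against pct's output)
pct exec <CT_ID> -- ps aux | grep node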
Kill processes:
# Kill by PID (get PID from ss output above)
pct exec <CT_ID> -- kill <PID>
# Kill by process name
pct exec <CT_ID> -- pkill -f "next dev"
# Force kill all node processes
pct exec <CT_ID> -- pkill -9 node
Background process management:
# Start process in background with log
pct exec <CT_ID> -- bash -c 'cd /root/app && npm run dev > /tmp/dev.log 2>&1 &'
# Check log output
pct exec <CT_ID> -- tail -50 /tmp/dev.log
# Follow log in real-time
pct exec <CT_ID> -- tail -f /tmp/dev.log
Storage & File Share Management¶
LXC 101 (NAS) - File Server¶
Container details:
- ID: 101
- IP: 10.89.97.89
- Role: Network-attached storage, Samba/NFS server
- Mount point on host: /vault/subvol-101-disk-0/
- Mount point in container: /mnt/vault/
Services running:
- Samba (SMB): File sharing for macOS/Windows
- NFS: File sharing for Linux/VMs (VM 100 arr-stack)
- Avahi (mDNS): Network discovery for Apple TV, macOS, iOS
Check Samba status:
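pct exec 101 -- systemctl status smbd nmbd --no-pager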
Edit Samba configuration:
# View config
pct exec 101 -- cat /etc/samba/smb.conf
# Edit from host
nano /rpool/data/subvol-101-disk-0/etc/samba/smb.conf
# Restart Samba
pct exec 101 -- systemctl restart smbd nmbd
Check NFS exports:
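# Configured exports
pct exec 101 -- cat /etc/exports
# Currently active exports
pct exec 101 -- exportfs -v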
View active connections:
# Samba connections
pct exec 101 -- smbstatus --shares
# NFS connections
pct exec 101 -- showmount -a
VM 100 (arr-stack) - Media Automation¶
NFS mounts from LXC 101:
ssh root@10.89.97.50 "mount | grep nfs"
# Should show:
# 10.89.97.10:/vault/subvol-101-disk-0/media on /mnt/media type nfs4
# 10.89.97.10:/vault/subvol-101-disk-0/media/downloads on /mnt/downloads type nfs4
Check arr-stack Docker containers:
ssh root@10.89.97.50 "docker ps"
ssh root@10.89.97.50 "docker compose -f /opt/arr-stack/docker-compose.yml ps"
Verify media access:
ssh root@10.89.97.50 "ls -la /mnt/media/tv | head"
ssh root@10.89.97.50 "ls -la /mnt/downloads | head"
Network Discovery (Avahi/mDNS)¶
Network discovery allows devices like Apple TV, smart TVs, and other media players to automatically find and display your NAS in their file browser. Different devices use different discovery protocols:
| Protocol | Used By | Service |
|---|---|---|
| mDNS/Bonjour | Apple TV, macOS, iOS, Chromecast | Avahi daemon |
| NetBIOS/WINS | Windows (legacy), older devices | Samba nmbd |
| WS-Discovery | Windows 10/11 | wsdd daemon |
TurnKey FileServer uses Samba with NetBIOS by default, which is why Windows PCs and some devices see the share, but Apple TV (which uses mDNS/Bonjour) does not.
Installing Avahi for Apple TV / macOS Discovery¶
1. Install Avahi daemon:
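pct exec 101 -- apt update
pct exec 101 -- apt install -y avahi-daemon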
2. Create SMB service advertisement:
pct exec 101 -- bash -c "mkdir -p /etc/avahi/services && cat > /etc/avahi/services/smb.service << 'EOF'
<?xml version=\"1.0\" standalone='no'?>
<!DOCTYPE service-group SYSTEM \"avahi-service.dtd\">
<service-group>
<name replace-wildcards=\"yes\">%h</name>
<service>
<type>_smb._tcp</type>
<port>445</port>
</service>
</service-group>
EOF"
3. Start and enable Avahi:
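pct exec 101 -- systemctl enable --now avahi-daemon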
4. Verify Avahi is running:
pct exec 101 -- systemctl status avahi-daemon
pct exec 101 -- avahi-browse -a -t # List all discovered services
The NAS will now broadcast as the container's hostname (e.g., nas) via mDNS.
Adding Device-Specific Service Files¶
For AFP (Apple Filing Protocol) - Legacy macOS:
pct exec 101 -- bash -c "cat > /etc/avahi/services/afp.service << 'EOF'
<?xml version=\"1.0\" standalone='no'?>
<!DOCTYPE service-group SYSTEM \"avahi-service.dtd\">
<service-group>
<name replace-wildcards=\"yes\">%h</name>
<service>
<type>_afpovertcp._tcp</type>
<port>548</port>
</service>
</service-group>
EOF"
For Time Machine backups:
pct exec 101 -- bash -c "cat > /etc/avahi/services/timemachine.service << 'EOF'
<?xml version=\"1.0\" standalone='no'?>
<!DOCTYPE service-group SYSTEM \"avahi-service.dtd\">
<service-group>
<name replace-wildcards=\"yes\">%h Time Machine</name>
<service>
<type>_adisk._tcp</type>
<port>9</port>
<txt-record>sys=waMa=0,adVF=0x100</txt-record>
<txt-record>dk0=adVN=TimeMachine,adVF=0x82</txt-record>
</service>
<service>
<type>_smb._tcp</type>
<port>445</port>
</service>
</service-group>
EOF"
Reload Avahi after adding services:
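# Avahi usually picks up new service files automatically; restart to be certain
pct exec 101 -- systemctl restart avahi-daemon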
Installing WS-Discovery for Windows 10/11¶
Windows 10/11 deprecated NetBIOS browsing. For automatic discovery in Windows File Explorer:
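pct exec 101 -- apt install -y wsdd
pct exec 101 -- systemctl enable --now wsdd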
Verify wsdd is running:
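pct exec 101 -- systemctl status wsdd
# WS-Discovery listens on 3702/UDP
pct exec 101 -- ss -ulnp | grep 3702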
Customizing the Broadcast Hostname¶
The hostname broadcast via mDNS is the container's hostname. To change it:
# Check current hostname
pct exec 101 -- hostname
# Change hostname (example: fileserver)
pct exec 101 -- hostnamectl set-hostname fileserver
# Restart Avahi to pick up the change
pct exec 101 -- systemctl restart avahi-daemon
Note: Changing hostname may require updating /etc/hosts as well.
Verifying Discovery from Other Devices¶
From macOS:
# List all SMB services on network
dns-sd -B _smb._tcp
# Resolve specific host
dns-sd -G v4 nas.local
From Linux:
# Install avahi-utils if needed
apt install avahi-utils
# Browse for SMB services
avahi-browse -rt _smb._tcp
From Apple TV:
1. Open the Computers app (or Videos → Library → Computers)
2. The NAS should appear with its hostname (e.g., "nas")
3. Select it and authenticate with Samba credentials
From Windows:
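1. Open File Explorer → Network
2. The NAS should appear via WS-Discovery (requires wsdd, installed above)
3. Or connect directly by path: \\10.89.97.89\vault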
Troubleshooting Network Discovery¶
NAS not appearing on Apple TV:
- Check Avahi is running
- Check the SMB service file exists
- Verify mDNS is broadcasting
- Check the firewall isn't blocking mDNS (port 5353/UDP)
- Restart all discovery services

Commands for each check, in order:
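# 1. Avahi running?
pct exec 101 -- systemctl status avahi-daemon
# 2. SMB service file present?
pct exec 101 -- ls -l /etc/avahi/services/smb.service
# 3. mDNS broadcasting? (from a Linux machine with avahi-utils)
avahi-browse -rt _smb._tcp
# 4. Firewall blocking mDNS? (look for 5353/UDP rules)
pct exec 101 -- iptables -L -n | grep 5353
# 5. Restart all discovery services
pct exec 101 -- systemctl restart avahi-daemon smbd nmbd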
NAS appears but can't connect:
- Verify Samba is running
- Check the share configuration
- Test authentication

Commands for each check:
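# Samba running?
pct exec 101 -- systemctl status smbd nmbd
# Share configuration valid?
pct exec 101 -- testparm -s
# Authentication works? (from another machine with smbclient installed)
smbclient -L 10.89.97.89 -U jake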
Multiple devices showing same name:
Each device advertising SMB needs a unique hostname. Check for conflicts:
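# List every host advertising SMB; duplicate names reveal the conflict
avahi-browse -rt _smb._tcp
# Change this container's hostname if needed (see "Customizing the Broadcast Hostname")
pct exec 101 -- hostnamectl set-hostname <unique-name>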
Complete Service Stack for Maximum Compatibility¶
For full cross-platform discovery (Apple TV, macOS, Windows, Linux, smart TVs):
# Install all discovery services
pct exec 101 -- apt install -y avahi-daemon wsdd
# Create SMB service file
pct exec 101 -- bash -c "cat > /etc/avahi/services/smb.service << 'EOF'
<?xml version=\"1.0\" standalone='no'?>
<!DOCTYPE service-group SYSTEM \"avahi-service.dtd\">
<service-group>
<name replace-wildcards=\"yes\">%h</name>
<service>
<type>_smb._tcp</type>
<port>445</port>
</service>
</service-group>
EOF"
# Enable all services
pct exec 101 -- systemctl enable --now dbus avahi-daemon wsdd smbd nmbd
# Verify all running
pct exec 101 -- systemctl status avahi-daemon wsdd smbd nmbd --no-pager
Services overview:
| Service | Port | Protocol | Discovery Method |
|---|---|---|---|
| smbd | 445 | TCP | File sharing (SMB/CIFS) |
| nmbd | 137-138 | UDP | NetBIOS name service |
| avahi-daemon | 5353 | UDP | mDNS/Bonjour |
| wsdd | 3702 | UDP | WS-Discovery |
UID/GID Mapping & Permissions¶
Understanding LXC UID Mapping¶
Unprivileged containers use UID/GID mapping for security:
- Container UID 0 (root) → Host UID 100000
- Container UID 1 → Host UID 100001
- ...and so on
This prevents container root from having root access on the host.
View current mapping:
# Check if container is unprivileged
pct config 101 | grep unprivileged
# unprivileged: 1
# View host UID/GID mapping configuration
cat /etc/subuid
cat /etc/subgid
Fixing Arr-Stack Permission Errors¶
Problem: Sonarr/Radarr/Lidarr can't delete/modify media files
Symptoms:
System.UnauthorizedAccessException: Access to the path '/data/media/tv/...' is denied.
---> System.IO.IOException: Permission denied
Root cause: UID mapping mismatch
- Arr-stack containers run as UID 1000 → create files as 1000:1000 on VM → 1000:1000 on host (via NFS)
- Samba file copies created as root in LXC 101 → become 100000:100000 on host
- Arr apps can't modify files they don't own
Solution: Custom UID mapping for LXC 101
This maps container UID 1000 → host UID 1000 directly, ensuring all sources create files with consistent ownership.
Step 1: Backup and stop LXC 101
# Backup configuration
pct config 101 > /root/lxc-101-config-backup-$(date +%Y%m%d-%H%M%S).txt
# Backup Samba config
pct exec 101 -- cp /etc/samba/smb.conf /etc/samba/smb.conf.backup
# Stop container
pct stop 101
Step 2: Enable UID mapping in Proxmox
# Enable root:1000:1 mapping in subuid/subgid
sed -i 's/#root:1000:1/root:1000:1/' /etc/subuid /etc/subgid
# If no commented-out line exists, append the mapping instead:
# echo 'root:1000:1' >> /etc/subuid
# echo 'root:1000:1' >> /etc/subgid
# Verify
cat /etc/subuid /etc/subgid
# Should show:
# root:100000:65536
# root:1000:1
Step 3: Add custom UID mapping to LXC 101
Edit /etc/pve/lxc/101.conf and add after unprivileged: 1:
lxc.idmap: u 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 0 100000 1000
lxc.idmap: g 1000 1000 1
lxc.idmap: g 1001 101001 64535
Explanation:
- u 0 100000 1000 - Map container UIDs 0-999 → host UIDs 100000-100999
- u 1000 1000 1 - Map container UID 1000 → host UID 1000 (direct mapping)
- u 1001 101001 64535 - Map container UIDs 1001-65535 → host UIDs 101001-165535
- Same for GIDs (g instead of u)
Step 4: Start and verify
# Start container
pct start 101
# Verify UID mapping is working
pct exec 101 -- id jake
# Should show uid=1000(jake); the primary group is corrected to admin (1000) in Step 5
# Check how media appears inside container
pct exec 101 -- stat -c "%U:%G (%u:%g)" /mnt/vault/media/tv
# Should show: jake:admin (1000:1000)
Step 5: Fix jake's primary group
# Change jake's primary group from root (0) to admin (1000)
pct exec 101 -- usermod -g 1000 jake
# Verify
pct exec 101 -- id jake
# Should show: uid=1000(jake) gid=1000(admin) groups=1000(admin),100(users)
Step 6: Update Samba to force jake user
Update /etc/samba/smb.conf vault section:
pct exec 101 -- bash -c "cat > /tmp/vault_section.txt << 'EOF'
[vault]
force user = jake
force group = admin
valid users = root,jake,@root
write list = root,jake,@root
create mode = 0664
path = /mnt/vault
directory mode = 0775
writeable = yes
EOF"
# Replace vault section
pct exec 101 -- bash -c "sed -i '/^\[vault\]/,/^$/d' /etc/samba/smb.conf && cat /tmp/vault_section.txt >> /etc/samba/smb.conf"
# Verify
pct exec 101 -- grep -A 10 "^\[vault\]" /etc/samba/smb.conf
# Restart Samba
pct exec 101 -- systemctl restart smbd nmbd
Step 7: Fix existing file ownership
# Change all media files to 1000:1000
# This ensures arr-stack can manage all existing files
ssh root@10.89.97.50 "chown -R 1000:1000 /mnt/media"
# This may take several minutes depending on file count
# You can run in background:
ssh root@10.89.97.50 "nohup chown -R 1000:1000 /mnt/media &"
Step 8: Reconnect Samba clients
Existing Samba connections use old configuration. Clients must disconnect and reconnect:
macOS/Windows:
1. Eject/unmount the share
2. Wait 5 seconds
3. Reconnect: smb://10.89.97.89/vault
4. Authenticate (as root or jake - doesn't matter, force user = jake applies to all)
5. Create test file - verify ownership
Verification:
Test file creation from all three sources:
# 1. Inside LXC 101 as jake
pct exec 101 -- su -s /bin/sh jake -c 'touch /mnt/vault/media/test-lxc.txt'
ls -l /vault/subvol-101-disk-0/media/test-lxc.txt
# Expected: -rw-r--r-- 1 1000 1000 0 ... test-lxc.txt
# 2. From arr-stack (Sonarr container)
ssh root@10.89.97.50 "docker exec sonarr su -s /bin/sh abc -c 'touch /data/media/test-sonarr.txt'"
ls -l /vault/subvol-101-disk-0/media/test-sonarr.txt
# Expected: -rw-r--r-- 1 1000 1000 0 ... test-sonarr.txt
# 3. From Samba (create file via macOS/Windows share)
# Then check from host:
ls -l /vault/subvol-101-disk-0/media/[your-test-file]
# Expected: -rw-rw-r-- 1 1000 1000 ... [your-test-file]
# Cleanup
rm -f /vault/subvol-101-disk-0/media/test-*.txt
All three methods should create files as 1000:1000 on the host.
Result: No more permission errors! All file sources create consistent ownership, arr-stack can manage all media files.
Persistence: These changes are permanent and survive container/host restarts.
VM Management¶
Basic VM Operations¶
List all VMs:
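qm list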
Start/stop/restart VMs:
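# Replace 100 with the target VM ID
qm start 100
qm shutdown 100   # graceful shutdown; qm stop 100 forces power-off
qm reboot 100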
View VM configuration:
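qm config 100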
SSH to VMs:
# VM 100 (arr-stack)
ssh root@10.89.97.50
# K3s master
ssh root@10.89.97.201
# K3s worker nodes
ssh root@10.89.97.202
ssh root@10.89.97.203
Check VM resource usage:
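# Status including memory and disk usage
qm status 100 --verbose
# Or check live usage from inside the VM
ssh root@10.89.97.50 "top -bn1 | head -20"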
Network Operations¶
Network Diagnostics¶
Check container/VM IPs:
# LXC containers
pct list | grep -E "VMID|101|150|160"
# VMs
qm list
# Ping test
ping -c 3 10.89.97.89 # LXC 101
ping -c 3 10.89.97.50 # VM 100
ping -c 3 10.89.97.201 # K3s master
Check network configuration:
# LXC container network config
pct config 101 | grep net0
# VM network config
qm config 100 | grep net0
Test connectivity between services:
# From Proxmox host to LXC
pct exec 101 -- hostname -I
# From VM to LXC NFS
ssh root@10.89.97.50 "showmount -e 10.89.97.10"
ssh root@10.89.97.50 "mount | grep nfs"
Backup & Recovery¶
LXC Container Backups¶
Manual backup:
# Backup LXC 101 configuration
pct config 101 > /root/lxc-101-config-backup-$(date +%Y%m%d-%H%M%S).txt
# Full container backup (creates .tar.gz)
vzdump 101 --compress gzip --dumpdir /var/lib/vz/dump/
Restore from backup:
# List available backups
ls -lh /var/lib/vz/dump/
# Restore to a new container ID (example: 201) to avoid clobbering the existing container
pct restore 201 /var/lib/vz/dump/vzdump-lxc-101-*.tar.gz
Important Configuration Backups¶
Before making changes, always backup:
# LXC configuration
pct config 101 > /root/backups/lxc-101-$(date +%Y%m%d).conf
# Samba configuration
pct exec 101 -- cp /etc/samba/smb.conf /etc/samba/smb.conf.$(date +%Y%m%d)
# NFS exports
pct exec 101 -- cp /etc/exports /etc/exports.$(date +%Y%m%d)
Common Operations Quick Reference¶
File Permission Debugging¶
# Check file ownership on host
ls -l /vault/subvol-101-disk-0/media/tv/[show-name]/
# Check file ownership inside LXC 101
pct exec 101 -- ls -l /mnt/vault/media/tv/[show-name]/
# Check file ownership from arr-stack VM
ssh root@10.89.97.50 "ls -l /mnt/media/tv/[show-name]/"
# Check what user created a file (stat)
stat -c "%U:%G (%u:%g)" /vault/subvol-101-disk-0/media/tv/file.mkv
Samba Operations¶
# Check active Samba connections
pct exec 101 -- smbstatus --brief
# View Samba logs
pct exec 101 -- tail -f /var/log/samba/samba.log
# Test Samba configuration
pct exec 101 -- testparm -s /etc/samba/smb.conf
# Restart Samba
pct exec 101 -- systemctl restart smbd nmbd
Arr-Stack Docker Operations¶
# SSH to VM 100
ssh root@10.89.97.50
# View all containers
docker ps
# View container logs
docker logs sonarr --tail 50
docker logs radarr --tail 50
# Restart container
docker restart sonarr
# Execute command in container
docker exec sonarr id
docker exec sonarr ls -la /config
Troubleshooting Tips¶
Permission errors in arr-stack¶
- Check UID mapping: pct config 101 | grep lxc.idmap
- Check file ownership: compare ownership on the host vs inside the container vs the arr-stack VM
- Check Samba config: verify force user = jake is set
- Check jake's group: pct exec 101 -- id jake should show gid=1000
- Reconnect Samba clients: existing connections won't pick up the new config
Samba shares not accessible¶
- Check Samba is running: pct exec 101 -- systemctl status smbd
- Check firewall: pct exec 101 -- iptables -L
- Test from host: smbclient -L 10.89.97.89 -U jake
- Check configuration: pct exec 101 -- testparm -s
NFS mount issues on VM 100¶
- Check NFS is running: pct exec 101 -- systemctl status nfs-server
- Check exports: pct exec 101 -- exportfs -v
- Test from VM: ssh root@10.89.97.50 "showmount -e 10.89.97.10"
- Remount: ssh root@10.89.97.50 "mount -a"
Related Documentation¶
- Troubleshooting Guide - Kubernetes-specific issues
- Storage Verification - Longhorn and PVC issues
- Disaster Recovery - Backup and recovery procedures