
VPS Reverse Proxy Setup

This guide documents the VPS-based reverse proxy architecture for exposing homelab services externally via bogocat.com.

Architecture Overview

Internet → Hetzner VPS (Caddy) → WireGuard → OPNsense → Internal Services
           5.161.45.147              10.22.95.0/24      10.89.97.0/24
                ↓                         ↓                  ↓
         Public HTTPS              VPN Tunnel          K8s Ingress, LXCs, VMs
         *.bogocat.com

Why this approach?

  • Home IP stays hidden (DNS points at the VPS public IP)
  • No port forwarding on the home router
  • DDoS buffer (an attack hits the VPS, not the home connection)
  • Professional appearance with a static IP

Current Configuration

Service           External URL                   Backend                                Auth
Authentik         https://auth.bogocat.com       K8s Ingress (auth.bogocat.com)         Public
Home Portal       https://portal.bogocat.com     K8s Ingress (portal.bogocat.com)       Authentik OAuth
Supabase Storage  https://storage.bogocat.com    K8s Ingress (storage.bogocat.com)      Public (RLS)
Jellyfin          https://jellyfin.bogocat.com   LXC 113 (10.89.97.97:8096)             Native
Jellyseerr        https://jellyseerr.bogocat.com K8s Ingress (jellyseerr.bogocat.com)   Authentik

Infrastructure Details

VPS: Hetzner CX22

Setting   Value
IP        5.161.45.147
Type      CX22 (2 vCPU, 4 GB RAM)
OS        Debian 13 (Trixie)
Cost      ~€4/month
Location  Ashburn, VA

Domain: bogocat.com (Cloudflare DNS)

DNS Configuration (in Cloudflare dashboard):

@    A    5.161.45.147
*    A    5.161.45.147

WireGuard Tunnel

Uses existing OPNsense WireGuard (10.22.95.0/24 network):

Endpoint   Tunnel IP     Role
VPS        10.22.95.10   Listens on UDP 51820; peer of the existing server
OPNsense   10.22.95.1    Existing WireGuard server; dials out to the VPS endpoint

VPS can reach all of 10.89.97.0/24 through the tunnel.


Setup Guide

Phase 1: VPS Provisioning

  1. Create a Hetzner Cloud account: https://console.hetzner.cloud/
  2. Add a server: Debian 12 or 13, CX22, and add your SSH key
  3. Note the public IP for the DNS records
# Initial setup
ssh root@<VPS_IP>
apt update && apt upgrade -y
apt install -y wireguard caddy
hostnamectl set-hostname vps-gateway

Phase 2: WireGuard Configuration

VPS side (/etc/wireguard/wg0.conf):

[Interface]
Address = 10.22.95.10/32
ListenPort = 51820
PrivateKey = <GENERATED_PRIVATE_KEY>

PostUp = sysctl -w net.ipv4.ip_forward=1

[Peer]
# OPNsense homelab
PublicKey = /gI0Tdq64dHHVtg8dmC4mAYBolDPTiuyA2h8KlQ3xCM=
AllowedIPs = 10.22.95.0/24, 10.89.97.0/24
PersistentKeepalive = 25

OPNsense side:

  1. VPN → WireGuard → Peers → Add
  2. Name: vps-gateway
  3. Public Key: <VPS_PUBLIC_KEY>
  4. Allowed IPs: 10.22.95.10/32
  5. Endpoint: 5.161.45.147:51820
  6. Keepalive: 25
  7. Apply configuration
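
For reference, the UI steps above produce a peer entry equivalent to this wg-quick-style stanza (OPNsense manages WireGuard through its UI, so this is illustrative only, not a file to edit):

```ini
[Peer]
# vps-gateway
PublicKey = <VPS_PUBLIC_KEY>
AllowedIPs = 10.22.95.10/32
Endpoint = 5.161.45.147:51820
PersistentKeepalive = 25
```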
# Enable on VPS
systemctl enable --now wg-quick@wg0

# Verify tunnel
wg show
ping 10.22.95.1      # OPNsense
ping 10.89.97.220    # K8s Ingress

Phase 3: Caddy Configuration

Production Caddyfile (/etc/caddy/Caddyfile):

Note: Caddy v2 handles WebSocket upgrades automatically. Do NOT add explicit Connection/Upgrade header manipulation - it actually breaks WebSockets.

{
    email admin@bogocat.com
}

# Authentik - SSO Provider
auth.bogocat.com {
    reverse_proxy 10.89.97.220:80 {
        header_up Host auth.bogocat.com
        header_up X-Forwarded-Host auth.bogocat.com
        header_up X-Forwarded-Proto https
        header_up X-Real-IP {remote_host}
    }
}

# Home Portal - Dashboard
portal.bogocat.com {
    reverse_proxy 10.89.97.220:80 {
        header_up Host home.internal
        header_up X-Forwarded-Host portal.bogocat.com
        header_up X-Forwarded-Proto https
    }
}

# Jellyfin - Media (native auth)
jellyfin.bogocat.com {
    reverse_proxy 10.89.97.97:8096 {
        header_up X-Forwarded-Host jellyfin.bogocat.com
        header_up X-Forwarded-Proto https
    }
}

# Jellyseerr - Media Requests
jellyseerr.bogocat.com {
    reverse_proxy 10.89.97.220:80 {
        header_up Host jellyseerr.bogocat.com
        header_up X-Forwarded-Host jellyseerr.bogocat.com
        header_up X-Forwarded-Proto https
    }
}
# Deploy
caddy validate --config /etc/caddy/Caddyfile
systemctl reload caddy

Phase 4: Kubernetes Configuration

External Ingress for Authentik

Authentik requires an Ingress that accepts its external hostname (so it generates correct URLs):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: authentik-external
  namespace: authentik
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
spec:
  ingressClassName: nginx
  rules:
  - host: auth.bogocat.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: authentik-server
            port:
              number: 80

nginx-ingress Forwarded Headers

Critical: nginx-ingress must trust X-Forwarded-* headers from Caddy:

kubectl patch configmap ingress-nginx-controller -n ingress-nginx \
  --type merge \
  -p '{"data":{"use-forwarded-headers":"true","compute-full-forwarded-for":"true"}}'

kubectl rollout restart deployment -n ingress-nginx ingress-nginx-controller

Without this, Authentik generates http:// URLs instead of https://.

Authentik Environment Variables

Set browser-facing URL for consistent URL generation:

kubectl set env deployment/authentik-server -n authentik \
  AUTHENTIK_HOST_BROWSER="https://auth.bogocat.com"
kubectl set env deployment/authentik-worker -n authentik \
  AUTHENTIK_HOST_BROWSER="https://auth.bogocat.com"

Phase 5: Authentik UI Configuration

  1. System → Brands: Set domain to auth.bogocat.com
  2. Applications → Providers → home-portal: Add redirect URI:
    https://portal.bogocat.com/api/auth/callback/authentik
    

Adding New Services

Service via K8s Ingress

For services already in K8s with an internal Ingress:

newservice.bogocat.com {
    reverse_proxy 10.89.97.220:80 {
        header_up Host newservice.internal
        header_up X-Forwarded-Host newservice.bogocat.com
        header_up X-Forwarded-Proto https
    }
}
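
This assumes an internal Ingress already answers for `newservice.internal`. If not, one can be created following the same pattern as the Authentik Ingress above (a sketch; the name, namespace, and Service are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: newservice
  namespace: newservice        # hypothetical namespace
spec:
  ingressClassName: nginx
  rules:
  - host: newservice.internal  # must match the header_up Host value in Caddy
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: newservice   # hypothetical Service name
            port:
              number: 80
```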

Service on LXC/VM

For services running outside K8s:

calibre.bogocat.com {
    reverse_proxy 10.89.97.XX:PORT {
        header_up X-Forwarded-Host calibre.bogocat.com
        header_up X-Forwarded-Proto https
    }
}

With Authentik Forward Auth

For services without native auth:

protected.bogocat.com {
    forward_auth 10.89.97.220:80 {
        uri /outpost.goauthentik.io/auth/caddy
        copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email X-Authentik-Uid
        header_up Host protected.internal
        header_up X-Forwarded-Host protected.bogocat.com
        header_up X-Original-URL https://protected.bogocat.com{uri}
    }
    reverse_proxy 10.89.97.220:80 {
        header_up Host protected.internal
        header_up X-Forwarded-Proto https
    }
}

Maintenance

VPS Access

# From Proxmox host
ssh root@5.161.45.147

# Or via tunnel IP
ssh root@10.22.95.10

Check Status

# WireGuard tunnel
ssh root@5.161.45.147 "wg show"

# Caddy logs
ssh root@5.161.45.147 "journalctl -u caddy -f"

# Test services
curl -I https://auth.bogocat.com
curl -I https://jellyfin.bogocat.com

Update Caddy Config

Preferred: Edit in git, then sync to VPS

# 1. Edit the production Caddyfile in tower-fleet
vim /root/tower-fleet/manifests/vps/Caddyfile.production

# 2. Commit changes
cd /root/tower-fleet && git add -A && git commit -m "chore: update VPS Caddyfile" && git push

# 3. Sync to VPS and reload
scp /root/tower-fleet/manifests/vps/Caddyfile.production root@5.161.45.147:/etc/caddy/Caddyfile
ssh root@5.161.45.147 "caddy reload --config /etc/caddy/Caddyfile"

Quick ad-hoc edit (remember to sync back to git)

ssh root@5.161.45.147 "nano /etc/caddy/Caddyfile && caddy validate --config /etc/caddy/Caddyfile && systemctl reload caddy"

Troubleshooting

OAuth 502 Bad Gateway on Callback

Symptom: Login redirects to Authentik successfully, but callback returns 502 Bad Gateway

Cause: nginx-ingress proxy buffer too small for OAuth response headers

Fix: Add buffer annotations to the app's Ingress:

annotations:
  nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
  nginx.ingress.kubernetes.io/proxy-buffers-number: "4"

Log signature:

upstream sent too big header while reading response header from upstream

Authentik Mixed Content Errors

Symptom: Browser console shows Mixed Content: http://... errors

Cause: nginx-ingress not forwarding X-Forwarded-Proto header

Fix:

kubectl patch configmap ingress-nginx-controller -n ingress-nginx \
  --type merge -p '{"data":{"use-forwarded-headers":"true"}}'
kubectl rollout restart deployment -n ingress-nginx ingress-nginx-controller

Authentik WebSocket Errors

Symptom: Firefox can't establish a connection to the server at wss://auth.bogocat.com/ws/client/

Cause: Explicit WebSocket header manipulation in Caddy breaks automatic WebSocket upgrades

Fix: Remove any header_up Connection and header_up Upgrade directives from Caddyfile. Caddy v2 handles WebSocket upgrades automatically - explicit manipulation breaks it.
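
A broken variant looks like the following sketch; if `header_up` lines like these appear anywhere in the Caddyfile, delete them (the placeholder syntax shown is just one common form of the mistake):

```
# BROKEN - do not use: these override Caddy's automatic upgrade handling
auth.bogocat.com {
    reverse_proxy 10.89.97.220:80 {
        header_up Connection {header.Connection}   # remove
        header_up Upgrade {header.Upgrade}         # remove
    }
}
```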

# Check current config
ssh root@5.161.45.147 "cat /etc/caddy/Caddyfile"

# If it has Connection/Upgrade headers, use the production config from git:
scp /root/tower-fleet/manifests/vps/Caddyfile.production root@5.161.45.147:/etc/caddy/Caddyfile
ssh root@5.161.45.147 "caddy reload --config /etc/caddy/Caddyfile"

Tunnel Not Connecting

  1. Check OPNsense WireGuard is enabled and applied
  2. Verify VPS public key matches OPNsense peer config
  3. Check wg show on both ends for handshake timestamp
  4. Ensure UDP 51820 is open on VPS firewall

502 Bad Gateway

  1. Check tunnel: ping 10.89.97.220 from VPS
  2. Check target service is running
  3. Verify Host header matches Ingress rule
  4. Check Caddy logs: journalctl -u caddy

Game Server Port Forwarding (Non-HTTP)

For game servers and other non-HTTP services, use iptables instead of Caddy:

Player → VPS:25565 → iptables DNAT → WireGuard → VM:25565

Setup

  1. Add iptables rules on VPS:

    ssh root@vps.bogocat.com '
    iptables -t nat -A PREROUTING -p tcp --dport PORT -j DNAT --to-destination 10.89.97.60:PORT
    iptables -t nat -A PREROUTING -p udp --dport PORT -j DNAT --to-destination 10.89.97.60:PORT
    iptables -A FORWARD -p tcp -d 10.89.97.60 --dport PORT -j ACCEPT
    iptables -A FORWARD -p udp -d 10.89.97.60 --dport PORT -j ACCEPT
    netfilter-persistent save
    '
    

  2. Add port to Hetzner Cloud Firewall (console.hetzner.cloud)

  3. MASQUERADE is required (already configured):

    iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
    

Without MASQUERADE, return traffic from the VM goes through the default gateway instead of back through WireGuard, breaking the connection.

Current Forwarded Ports

Port    Service    Destination
8211    Palworld   10.89.97.60 (VM 360)
25565   Minecraft  10.89.97.60 (VM 360)
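
Applying the setup template above to these two ports gives DNAT entries equivalent to this excerpt (nat table only; the matching FORWARD ACCEPT rules follow the same pattern):

```
iptables -t nat -A PREROUTING -p tcp --dport 8211  -j DNAT --to-destination 10.89.97.60:8211
iptables -t nat -A PREROUTING -p udp --dport 8211  -j DNAT --to-destination 10.89.97.60:8211
iptables -t nat -A PREROUTING -p tcp --dport 25565 -j DNAT --to-destination 10.89.97.60:25565
iptables -t nat -A PREROUTING -p udp --dport 25565 -j DNAT --to-destination 10.89.97.60:25565
```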

See Pelican documentation for game server management.


Security Considerations

  • VPS only exposes ports 22, 80, 443, 51820 (WireGuard), plus game ports
  • Hetzner Cloud Firewall provides first layer of filtering
  • All traffic to homelab is encrypted via WireGuard
  • TLS termination at Caddy with Let's Encrypt
  • Home IP never exposed in DNS or logs
  • Game servers isolated in Docker containers on dedicated VM
  • Run hardening script: /root/tower-fleet/manifests/vps/hardening.sh