Tower Fleet Documentation

Project: Multi-node k3s Kubernetes cluster on Proxmox
Started: 2025-11-09
Status: Phase 1 & 1.5 complete (k3s cluster operational and tested)
Location: /root/k8s/


Web Documentation: https://otterwiki.bogocat.com (OtterWiki on K8s)


Documentation Structure

Getting Started

  1. Infrastructure Overview - Architecture, physical to application layers
  2. Kubernetes Overview - K3s cluster details
  3. kubectl Guide - Understanding kubectl and kubeconfig

Fundamentals

Implementation Guides

  1. 03-core-infrastructure.md (Coming soon) - MetalLB, Longhorn, cert-manager
  2. 04-gitops-flux.md (Coming soon) - Flux setup, GitOps workflow
  3. 05-observability.md (Coming soon) - Prometheus, Grafana, Loki stack
  4. 06-app-deployments.md (Coming soon) - Deploying Next.js apps to k8s

Reference

Design Documents

Architecture Decision Records (ADRs)


Current Status

Last Updated: 2025-11-16

✅ Completed

  • [x] Architecture design (multi-node k3s)
  • [x] VM creation (3 VMs: 201-master, 202-worker-1, 203-worker-2)
  • [x] k3s installation (v1.33.5+k3s1)
  • [x] kubectl access from Proxmox host
  • [x] Cluster health verification (all nodes Ready)
  • [x] Core infrastructure deployment (MetalLB, Longhorn, cert-manager)
  • [x] Observability stack (Prometheus, Grafana, Loki, Promtail)
  • [x] Application migrations (home-portal, money-tracker)
  • [x] Sealed Secrets
  • [x] Kubernetes Dashboard
  • [x] Docker Registry
  • [x] Supabase shared instance
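The deployed components above can be smoke-tested with a quick per-namespace pod listing. This is a sketch: the namespace names are each project's usual defaults (plus `monitoring` for the observability stack), not confirmed from this cluster, so adjust them to match your installs.

```shell
# List pods for each platform component; namespace names are assumed defaults.
for ns in metallb-system longhorn-system cert-manager monitoring; do
  echo "--- $ns ---"
  kubectl get pods -n "$ns" 2>/dev/null || echo "(no access or namespace missing)"
done
```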

🚧 In Progress

  • [ ] Documentation completion
  • [ ] Disaster recovery procedures

📋 Planned

  • [ ] Flux GitOps setup
  • [ ] Additional application migrations (RMS, SubtitleAI)

Cluster Information

Nodes:

  • k3s-master (10.89.97.201) - 4 cores, 8GB RAM, 80GB disk
  • k3s-worker-1 (10.89.97.202) - 4 cores, 8GB RAM, 80GB disk
  • k3s-worker-2 (10.89.97.203) - 4 cores, 8GB RAM, 80GB disk

Network:

  • Cluster IPs: 10.89.97.201-203
  • Service IP pool: 10.89.97.210-220 (MetalLB)
  • Gateway: 10.89.97.1
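To see how the service IP pool is being used, you can inspect MetalLB's pool object and any services that have claimed an address. This sketch assumes MetalLB 0.13 or newer with CRD-based configuration (the `IPAddressPool` resource); older config-map-based installs won't have that resource.

```shell
# Show the configured pool (assumes MetalLB >= 0.13 CRD config):
kubectl get ipaddresspools.metallb.io -n metallb-system 2>/dev/null \
  || echo "(MetalLB CRDs not found)"

# Show services that have claimed an IP from 10.89.97.210-220:
kubectl get svc -A 2>/dev/null | grep LoadBalancer \
  || echo "(no LoadBalancer services yet)"
```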

Access:

# From Proxmox host
kubectl get nodes

# SSH to nodes
ssh root@10.89.97.201  # master
ssh root@10.89.97.202  # worker-1
ssh root@10.89.97.203  # worker-2
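kubectl works from the Proxmox host because it has a kubeconfig pointing at the master. A typical way to set that up (a sketch: the path is the k3s default admin kubeconfig, and the master IP is taken from the table above; verify both on your host):

```shell
# k3s writes its admin kubeconfig here on the master node:
K3S_CONFIG=/etc/rancher/k3s/k3s.yaml

mkdir -p "$HOME/.kube"
if [ -f "$K3S_CONFIG" ]; then
  cp "$K3S_CONFIG" "$HOME/.kube/config"
  # k3s records the API server as 127.0.0.1; point it at the master
  # when using this kubeconfig from another machine:
  sed -i 's/127\.0\.0\.1/10.89.97.201/' "$HOME/.kube/config"
  chmod 600 "$HOME/.kube/config"
fi
export KUBECONFIG="$HOME/.kube/config"
```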


Quick Start

Check cluster health:

kubectl get nodes
kubectl get pods -A

Deploy a test app:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc
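Once the service exists, MetalLB should assign it an external IP from the 10.89.97.210-220 pool. A sketch for verifying that and cleaning up afterward (the jsonpath expression reads the assigned LoadBalancer IP):

```shell
kubectl get svc nginx -o wide

# Pull out the assigned external IP, if any, and probe it:
EXTERNAL_IP=$(kubectl get svc nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
if [ -n "$EXTERNAL_IP" ]; then
  curl -sI "http://$EXTERNAL_IP" | head -n 1
else
  echo "no external IP yet (MetalLB may still be assigning one)"
fi

# Tear down the test resources when done:
kubectl delete service/nginx deployment/nginx 2>/dev/null || true
```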

View logs:

kubectl logs -f deployment/nginx
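When logs alone don't explain a failure, these are the usual next checks (standard kubectl; the nginx names refer to the test deployment above):

```shell
kubectl describe deployment nginx 2>/dev/null       # conditions, rollout status
kubectl logs deployment/nginx --previous 2>/dev/null  # logs from a restarted container
kubectl get events --sort-by=.metadata.creationTimestamp 2>/dev/null | tail -n 20
```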


Getting Help


Next Steps

Continue with 03-core-infrastructure.md to install:

  • MetalLB (LoadBalancer services)
  • Longhorn (distributed storage)
  • cert-manager (SSL certificates)
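As a rough preview of what that guide covers, all three components have official Helm charts. This is a sketch, not this repo's confirmed procedure: the chart repos are the projects' official ones, and the release and namespace names below are conventional choices, not mandated here.

```shell
helm repo add metallb https://metallb.github.io/metallb
helm repo add longhorn https://charts.longhorn.io
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install metallb metallb/metallb -n metallb-system --create-namespace
helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace
helm install cert-manager jetstack/cert-manager -n cert-manager \
  --create-namespace --set installCRDs=true

# Sanity check:
kubectl get pods -n cert-manager 2>/dev/null | head
```

Note that MetalLB does nothing until it is also given an IPAddressPool (and an L2Advertisement) covering 10.89.97.210-220; LoadBalancer services stay pending without one.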