MetalLB IP Pool Management¶
Purpose: Manage MetalLB LoadBalancer IP pool and allocations
Last Updated: 2025-12-03
Overview¶
MetalLB provides LoadBalancer services for bare-metal Kubernetes clusters by allocating IP addresses from a configured pool.
Current Configuration:
- Pool Range: 10.89.97.210 - 10.89.97.230 (21 IPs total)
- Namespace: metallb-system
- Mode: Layer 2 (ARP)
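For reference, the configuration above maps to two MetalLB custom resources. A sketch of what they typically look like; the resource names (default-pool, default-l2) are assumptions, so check the actual names with kubectl get ipaddresspool,l2advertisement -n metallb-system:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool          # assumed name; verify in your cluster
  namespace: metallb-system
spec:
  addresses:
  - 10.89.97.210-10.89.97.230
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2            # assumed name; verify in your cluster
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool              # must match the IPAddressPool name above
```

The L2Advertisement is what makes MetalLB answer ARP for addresses in the pool; without it, IPs are allocated but unreachable.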
Check Pool Status¶
View Pool Configuration¶
kubectl get ipaddresspool -n metallb-system -o yaml
Key fields:
spec:
  addresses:
  - 10.89.97.210-10.89.97.230   # IP range
  autoAssign: true              # Automatically assign IPs
status:
  assignedIPv4: 12              # Number of IPs currently assigned
List All LoadBalancer Services¶
kubectl get svc -A | grep LoadBalancer
Example output:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
supabase kong LoadBalancer 10.43.150.110 10.89.97.214 8000:32697/TCP
supabase studio LoadBalancer 10.43.103.113 10.89.97.215 3000:31569/TCP
monitoring prometheus LoadBalancer 10.43.163.230 10.89.97.216 9090:31024/TCP
...
Count Allocated IPs¶
# Total LoadBalancers
kubectl get svc -A | grep LoadBalancer | wc -l
# Assigned IPs (excludes <pending>)
kubectl get svc -A -o json | jq '.items[] | select(.spec.type=="LoadBalancer") | select(.status.loadBalancer.ingress != null) | .status.loadBalancer.ingress[0].ip' | wc -l
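To relate the assigned count to pool capacity, the pool size can be derived from the range string itself. A minimal sketch, assuming both endpoints sit in the same /24 (true for 10.89.97.210-230, where only the last octet differs):

```shell
# Compute the total number of IPs in a MetalLB range string.
# Assumes both endpoints differ only in the last octet (same /24).
range="10.89.97.210-10.89.97.230"   # e.g. taken from the IPAddressPool spec
start=${range%-*}                   # 10.89.97.210
end=${range#*-}                     # 10.89.97.230
total=$(( ${end##*.} - ${start##*.} + 1 ))
echo "Pool size: $total IPs"        # Pool size: 21 IPs
```

Comparing this total against assignedIPv4 gives the free-IP headroom mentioned in the planning section.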
Common Scenarios¶
Scenario 1: Pool Exhausted (No IPs Available)¶
Symptom: a new LoadBalancer service stays in <pending> with no EXTERNAL-IP assigned.
Diagnosis:
# Check if all IPs allocated
kubectl get ipaddresspool -n metallb-system -o yaml | grep assignedIPv4
# List what's using IPs
kubectl get svc -A | grep LoadBalancer | grep -v pending
Solutions:
Option A: Expand IP Pool
# Edit IPAddressPool
kubectl edit ipaddresspool <pool-name> -n metallb-system
# Change:
addresses:
- 10.89.97.210-10.89.97.220 # Old range (11 IPs)
# To:
addresses:
- 10.89.97.210-10.89.97.230 # New range (21 IPs)
Option B: Use ClusterIP Instead
For services that don't need external access (like sandbox environments):
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: ClusterIP   # Changed from LoadBalancer
  ports:
  - port: 8000
Access via port-forward:
kubectl port-forward svc/myservice 8000:8000
Option C: Delete Unused LoadBalancers
# Find unused services
kubectl get svc -A | grep LoadBalancer
# Delete if not needed
kubectl delete svc -n <namespace> <service-name>
Scenario 2: Specific IP Assignment¶
Request specific IP from pool:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  loadBalancerIP: 10.89.97.225   # Must be in pool range
  ports:
  - port: 8000
⚠️ Warning: If the IP is already allocated, the service will remain <pending>.
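Note that spec.loadBalancerIP is deprecated in upstream Kubernetes (since 1.24), and current MetalLB releases recommend requesting specific IPs via an annotation instead. A sketch of the equivalent request using the annotation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    metallb.universe.tf/loadBalancerIPs: 10.89.97.225   # must be in pool range
spec:
  type: LoadBalancer
  ports:
  - port: 8000
```

The annotation also accepts a comma-separated list, which is how MetalLB supports dual-stack (IPv4 + IPv6) assignments.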
Scenario 3: Service Stuck in Pending¶
Troubleshooting:
# 1. Check MetalLB pods running
kubectl get pods -n metallb-system
# 2. Check service events
kubectl describe svc -n <namespace> <service-name>
# 3. Check MetalLB controller logs
kubectl logs -n metallb-system -l app=metallb,component=controller
# 4. Check MetalLB speaker logs
kubectl logs -n metallb-system -l app=metallb,component=speaker
Common causes:
1. Pool exhausted (see Scenario 1)
2. Requested IP outside pool range
3. Requested IP already allocated
4. MetalLB pods not running
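The pending check can be scripted against the table output of kubectl get svc -A. A sketch using sample output in place of a live cluster; the column order (NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)) matches the example earlier in this page:

```shell
# Flag LoadBalancer services stuck without an external IP.
# In practice, pipe `kubectl get svc -A --no-headers` in place of the sample.
svc_table='supabase   kong    LoadBalancer   10.43.150.110   10.89.97.214   8000:32697/TCP
sandbox    kong    LoadBalancer   10.43.1.2       <pending>      8000:30000/TCP'
echo "$svc_table" | awk '$3 == "LoadBalancer" && $5 == "<pending>" { print $1 "/" $2 }'
# prints: sandbox/kong
```

Anything it prints is a candidate for Scenario 1 (pool exhausted) or an invalid IP request.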
IP Allocation Audit¶
Current Allocations (as of 2025-12-03)¶
| IP | Service | Namespace | Purpose |
|---|---|---|---|
| 10.89.97.210 | longhorn-ui-lb | longhorn-system | Longhorn Dashboard |
| 10.89.97.211 | grafana | monitoring | Grafana Dashboard |
| 10.89.97.212 | kubernetes-dashboard | kubernetes-dashboard | K8s Dashboard |
| 10.89.97.213 | subtitleai-web | subtitleai | SubtitleAI App |
| 10.89.97.214 | kong | supabase | Supabase API Gateway |
| 10.89.97.215 | studio | supabase | Supabase Studio |
| 10.89.97.216 | prometheus | monitoring | Prometheus |
| 10.89.97.217 | alertmanager | monitoring | Alertmanager |
| 10.89.97.218 | ldap-outpost | authentik | Authentik LDAP |
| 10.89.97.219 | trip-planner | trip-planner | Trip Planner App |
| 10.89.97.220 | ingress-nginx | ingress-nginx | Nginx Ingress |
| 10.89.97.221 | kong | supabase-sandbox | Sandbox Supabase API |
Status: 12/21 IPs allocated (pool expanded 2025-12-04)
Generate Current Report¶
#!/bin/bash
# Save as /root/scripts/metallb-report.sh
echo "=== MetalLB IP Pool Status ==="
echo ""
# Pool configuration
kubectl get ipaddresspool -n metallb-system -o yaml | grep -A 2 "addresses:"
echo ""
echo "=== Allocated IPs ==="
kubectl get svc -A -o custom-columns='IP:.status.loadBalancer.ingress[0].ip,NAMESPACE:.metadata.namespace,SERVICE:.metadata.name,PORTS:.spec.ports[*].port' | grep -v '<none>' | sort
echo ""
echo "=== Summary ==="
TOTAL_LB=$(kubectl get svc -A | grep LoadBalancer | wc -l)
ASSIGNED=$(kubectl get svc -A -o json | jq '[.items[] | select(.spec.type=="LoadBalancer") | select(.status.loadBalancer.ingress != null)] | length')
PENDING=$((TOTAL_LB - ASSIGNED))
echo "Total LoadBalancers: $TOTAL_LB"
echo "Assigned IPs: $ASSIGNED"
echo "Pending: $PENDING"
Best Practices¶
When to Use LoadBalancer¶
✅ Good use cases:
- Production applications needing external access
- Services accessed by users outside the cluster
- Services needing stable, predictable IPs
❌ Avoid LoadBalancer for:
- Internal-only services (use ClusterIP)
- Development/sandbox environments (use ClusterIP + port-forward)
- Temporary testing (use port-forward)
When to Use ClusterIP¶
✅ Good use cases:
- Internal microservices communication
- Sandbox/staging environments
- Services accessed via Ingress
- Dev/test environments
Access ClusterIP services:
# Port-forward for testing
kubectl port-forward svc/<service-name> <local-port>:<service-port>
# Or use Ingress for HTTP/HTTPS
IP Pool Planning¶
Recommended pool size:
Production services + 20% buffer
Example:
- 8 production services
- 2 monitoring services
- 2 infrastructure services
= 12 services → allocate 15 IPs (20% buffer)
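The sizing rule above can be computed directly. A small sketch using integer arithmetic to round the 20% buffer up:

```shell
# Pool sizing: service count plus a 20% buffer, rounded up.
services=12
buffer=$(( (services * 20 + 99) / 100 ))   # ceil(12 * 0.20) = 3
echo "Recommended pool size: $(( services + buffer )) IPs"   # 15 IPs
```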
Expand pool before exhaustion:
- Monitor allocation: kubectl get ipaddresspool -n metallb-system
- Expand when 80% full
- Document IP assignments
Expanding the IP Pool¶
Current range: 10.89.97.210-230 (21 IPs)
To expand further (example to .240):
# 1. Edit IPAddressPool
kubectl edit ipaddresspool <pool-name> -n metallb-system
# 2. Update addresses
spec:
  addresses:
  - 10.89.97.210-10.89.97.240   # Expanded from 230
# 3. Verify
kubectl get ipaddresspool -n metallb-system -o yaml | grep -A 2 "addresses:"
# 4. Test with new service
kubectl create deployment test --image=nginx
kubectl expose deployment test --type=LoadBalancer --port=80
# 5. Check IP assigned
kubectl get svc test
# Should show a free IP from the pool (the new addresses fall in 231-240)
# 6. Cleanup
kubectl delete svc test
kubectl delete deployment test
⚠️ Ensure IPs are available on network:
- Check router DHCP range doesn't overlap
- Ping each IP to verify it's not in use (no reply suggests it's free): ping -c 1 10.89.97.231
- Update DNS if needed
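The per-IP check above can be looped over the whole candidate range. A sketch for a planned .231-.240 extension; it assumes ICMP is not filtered on the network, so a failed ping suggests (but does not prove) the address is free:

```shell
# Probe each candidate IP before adding it to the pool.
# A reply means the address is already taken on the LAN.
for i in $(seq 231 240); do
  ip="10.89.97.$i"
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip is IN USE -- pick another"
  else
    echo "$ip appears free"
  fi
done
```

Checking the router's ARP table as well catches hosts that drop ICMP.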
Migrating LoadBalancer → ClusterIP¶
For non-production services (e.g., sandbox):
# 1. Get current service config
kubectl get svc -n <namespace> <service> -o yaml > service-backup.yaml
# 2. Edit service
kubectl edit svc -n <namespace> <service>
# 3. Change type
spec:
  type: ClusterIP   # Changed from LoadBalancer
# Remove the loadBalancerIP field if present
# 4. Verify
kubectl get svc -n <namespace> <service>
# EXTERNAL-IP should show <none>
# 5. Access via port-forward
kubectl port-forward -n <namespace> svc/<service> <local-port>:<service-port>
Migrating ClusterIP → Ingress¶
For HTTP/HTTPS services:
# 1. Ensure service is ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP
  ports:
  - port: 80
---
# 2. Create Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 80
Benefits:
- One LoadBalancer IP serves all Ingress services (saves IPs)
- HTTP routing and SSL termination
- Path-based routing
Troubleshooting Commands¶
# Check MetalLB installation
kubectl get all -n metallb-system
# View pool config
kubectl get ipaddresspool -n metallb-system -o yaml
# Check which IPs are in use
kubectl get svc -A | grep LoadBalancer
# Check MetalLB logs
kubectl logs -n metallb-system -l app=metallb,component=controller
kubectl logs -n metallb-system -l app=metallb,component=speaker
# Force service to get new IP (if stuck)
kubectl delete svc -n <namespace> <service>
kubectl apply -f <service-manifest>
# Check L2Advertisement (Layer 2 mode)
kubectl get l2advertisement -n metallb-system -o yaml
Related Documentation¶
- Sandbox Deployment - Uses ClusterIP to avoid pool exhaustion
- Ingress Configuration - Alternative to LoadBalancer for HTTP services
- MetalLB Official Docs
Summary¶
Key Takeaways:
- ✅ Monitor pool usage: kubectl get ipaddresspool -n metallb-system
- ✅ Use ClusterIP for internal/dev services
- ✅ Use Ingress for HTTP services (saves IPs)
- ✅ Expand pool before exhaustion (at 80% full)
- ✅ Document IP allocations
- ✅ Regular audits to free unused IPs