kubectl & Kubeconfig Guide

Understanding how Kubernetes cluster management works.


What is kubectl?

kubectl = "Kubernetes Control" - the official command-line tool for interacting with Kubernetes clusters.

Think of it like:

- docker command → manages Docker containers
- kubectl command → manages Kubernetes clusters
- git command → manages Git repositories

What it does:

- Deploy applications: kubectl apply -f deployment.yaml
- View running pods: kubectl get pods
- Check logs: kubectl logs pod-name
- Scale deployments: kubectl scale deployment/app --replicas=3
- Debug issues: kubectl describe pod/app, kubectl exec -it pod/app -- bash
- Delete resources: kubectl delete deployment/app


What's Required to Manage the Cluster?

To run kubectl commands from any machine, you need 3 things:

1. kubectl Binary

The command-line tool itself.

Installation:

# Linux (what we used for Proxmox host)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# macOS
brew install kubectl

# Windows
# Download from https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/

# Verify
kubectl version --client

2. Kubeconfig File

A YAML configuration file containing:

- Cluster API endpoint - where to connect (e.g., https://10.89.97.201:6443)
- CA certificate - to verify the cluster's identity (security)
- Client certificate + key - to authenticate as admin (like a password)

3. Network Access

Your machine must be able to reach the k3s master's API server:

- Default port: 6443
- For our cluster: 10.89.97.201:6443

Test connectivity:

nc -zv 10.89.97.201 6443
# Should output: Connection to 10.89.97.201 6443 port [tcp/*] succeeded!
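
If nc isn't available, you can also probe the API server over TLS with curl. This is a sketch assuming a default k3s install, where unauthenticated clients can usually read /version; if your cluster returns 401 Unauthorized instead, that still proves the API server is reachable:

curl -k https://10.89.97.201:6443/version
# Either a JSON version blob or a 401 response means the API server is up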


Kubeconfig File Locations: Two Conventions

/etc/rancher/k3s/k3s.yaml (k3s-specific)

Where: Only on k3s nodes (master and workers)

Why this location?

- k3s (made by Rancher) automatically generates this file during installation
- /etc/ = Linux standard directory for system-wide configuration files
- /etc/rancher/k3s/ = k3s's config directory

This is the "source of truth" - generated and managed by k3s itself.

Contents:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUd...  # CA cert (base64)
    server: https://127.0.0.1:6443              # ← localhost (only works on master)
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0tLS1CRUd...    # Client cert (base64)
    client-key-data: LS0tLS1CRUd...            # Client key (base64)

Problem: server: https://127.0.0.1:6443 only works on the master node itself!

~/.kube/config (Kubernetes standard)

Where: Anywhere you want to run kubectl

Why this location?

- This is the official Kubernetes convention
- kubectl automatically looks here by default
- ~/ = user's home directory (user-specific, not system-wide)
- Works across all Kubernetes distributions:
  - k3s (what we're using)
  - Standard kubeadm k8s
  - EKS (Amazon), GKE (Google), AKS (Azure)
  - k0s, microk8s, etc.

This is where kubectl expects to find config by default.

For remote access: We copy /etc/rancher/k3s/k3s.yaml and update the server IP:

# Copy from master
scp root@10.89.97.201:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Update server IP (critical!)
sed -i 's/127.0.0.1/10.89.97.201/g' ~/.kube/config

# Now kubectl works remotely
kubectl get nodes
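
If the target machine already has a ~/.kube/config you don't want to overwrite, a common alternative is to keep the k3s copy as a separate file and point KUBECONFIG at it (config-tower is just an arbitrary filename for this sketch):

# Keep the k3s config as a separate file instead of overwriting ~/.kube/config
scp root@10.89.97.201:/etc/rancher/k3s/k3s.yaml ~/.kube/config-tower
sed -i 's/127.0.0.1/10.89.97.201/g' ~/.kube/config-tower
chmod 600 ~/.kube/config-tower
export KUBECONFIG=~/.kube/config-tower
kubectl get nodes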

How kubectl Finds Config

kubectl searches in this order:

1. $KUBECONFIG environment variable (if set)

   export KUBECONFIG=/path/to/custom/config
   kubectl get nodes  # Uses custom config

2. ~/.kube/config (standard location - most common)

   kubectl get nodes  # Auto-finds ~/.kube/config

3. In-cluster config (only works inside pods running in Kubernetes)
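
For reference, the in-cluster config in option 3 comes from files Kubernetes mounts into every pod, so you can inspect them from inside any running pod:

# Inside a pod: credentials mounted automatically by Kubernetes
ls /var/run/secrets/kubernetes.io/serviceaccount/
# ca.crt  namespace  token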

Examples:

# Option A: Standard location (recommended)
# kubectl automatically finds ~/.kube/config
kubectl get nodes

# Option B: Custom location via env var
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes

# Option C: Inline override (one-time use)
KUBECONFIG=/some/other/path/config kubectl get nodes

# Option D: Merge multiple configs
export KUBECONFIG=~/.kube/config-cluster1:~/.kube/config-cluster2
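
# Option E: Show the effective (merged) config for the current context
kubectl config view --minify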

Where Can You Manage the Cluster From?

✅ k3s Master (10.89.97.201)

Status: Already works!

- Kubeconfig: /etc/rancher/k3s/k3s.yaml
- kubectl: Pre-installed (symlink to the k3s binary)
- Access: ssh root@10.89.97.201 'kubectl get nodes'

No setup needed - k3s handles everything.

✅ Proxmox Host (tower)

Status: Already configured!

- Kubeconfig: ~/.kube/config (copied and IP-updated)
- kubectl: Installed at /usr/local/bin/kubectl
- Access: kubectl get nodes (connects to 10.89.97.201:6443 remotely)

This is our primary management location.

✅ Worker Nodes (10.89.97.202, 10.89.97.203)

Status: Possible, but needs setup

- kubectl: Pre-installed (symlink to k3s)
- Kubeconfig: Not included by default - would need to copy from the master

Setup (if needed):

# Log in to the worker first
ssh root@10.89.97.202

# Then, on the worker, fetch the kubeconfig from the master
mkdir -p ~/.kube
scp root@10.89.97.201:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/10.89.97.201/g' ~/.kube/config

# Now kubectl works
kubectl get nodes

Note: This is uncommon - workers are for running workloads, not management.

✅ Your Laptop/Desktop

Status: Easy to set up!

macOS/Linux:

# 1. Install kubectl
brew install kubectl  # macOS
# or use curl method for Linux

# 2. Create kube directory
mkdir -p ~/.kube

# 3. Copy config from Proxmox host (or master)
scp root@PROXMOX_IP:~/.kube/config ~/.kube/config

# 4. Test (make sure you can reach 10.89.97.201:6443 on your network)
kubectl get nodes

Windows:

# 1. Install kubectl (download from kubernetes.io)

# 2. Create directory
mkdir $HOME\.kube

# 3. Copy config using WinSCP or scp
# Save to: C:\Users\YourName\.kube\config

# 4. Test
kubectl get nodes

✅ Any Machine on Your Network

Yes, if it:

- Can reach 10.89.97.201:6443 (test with nc -zv 10.89.97.201 6443)
- Has kubectl installed
- Has the kubeconfig file

A quick way to check all three is sketched below.
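
A minimal one-liner that verifies each requirement in order, then makes a real API call (adjust the IP for your cluster):

nc -zv 10.89.97.201 6443 && command -v kubectl && test -f ~/.kube/config && kubectl get nodes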


Security Implications

⚠️ Important: The kubeconfig file contains full admin credentials.

Anyone with this file has complete cluster control - equivalent to root access:

- Deploy/delete any application
- Read all secrets (including passwords, API keys)
- Scale/modify any workload
- Delete the entire cluster
- Access all namespaces

Security Best Practices

For Homelab (what we're doing):

- ✅ Keep the kubeconfig on trusted machines (Proxmox host, your laptop)
- ✅ Set proper file permissions: chmod 600 ~/.kube/config
- ⚠️ Never commit it to git or share it publicly
- ⚠️ Don't store it in Dropbox/cloud-sync folders

For Production/Multi-User:

- Create limited kubeconfigs with RBAC (Role-Based Access Control)
  - Example: a developer gets read-only access to only the "dev" namespace
- Use ServiceAccounts for applications
- Rotate certificates periodically
- Use authentication plugins (OIDC, LDAP)

Creating a limited kubeconfig (future):

# Create ServiceAccount with limited permissions
kubectl create serviceaccount developer
kubectl create rolebinding developer --clusterrole=view --serviceaccount=default:developer

# Extract token and create limited kubeconfig
# (Details in future 06-app-deployments.md)
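
# Rough sketch only (assumes Kubernetes v1.24+ for `kubectl create token`):
# mint a short-lived token for the ServiceAccount and wire it into a context
TOKEN=$(kubectl create token developer)
kubectl config set-credentials developer --token="$TOKEN"
kubectl config set-context developer --cluster=default --user=developer
kubectl config use-context developer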


Managing Multiple Clusters

The ~/.kube/config file can contain multiple clusters:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS...
    server: https://10.89.97.201:6443
  name: tower-fleet
- cluster:
    certificate-authority-data: LS0tLS...
    server: https://prod-cluster.example.com:6443
  name: production

contexts:
- context:
    cluster: tower-fleet
    user: tower-admin
    namespace: default  # optional: default namespace
  name: tower-fleet
- context:
    cluster: production
    user: prod-admin
    namespace: prod
  name: production

current-context: tower-fleet  # ← which cluster you're using now

users:
- name: tower-admin
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...
- name: prod-admin
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...

Commands for multi-cluster management:

# List all contexts (clusters)
kubectl config get-contexts

# Switch to different cluster
kubectl config use-context production
kubectl get nodes  # Now managing production cluster

kubectl config use-context tower-fleet
kubectl get nodes  # Back to homelab

# View current context
kubectl config current-context

# Set default namespace for context
kubectl config set-context --current --namespace=my-app

# One-time context override
kubectl get pods --context=production
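
# Note: k3s names its cluster, user, and context "default". To get the
# friendlier names used above, rename the context in place (this changes
# only the context name, not the cluster/user entries):
kubectl config rename-context default tower-fleet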

Common kubectl Commands

Cluster Information

# View all nodes
kubectl get nodes
kubectl get nodes -o wide  # More details (IPs, OS, kernel)

# Cluster info
kubectl cluster-info

# View API versions
kubectl api-resources
kubectl api-versions

Working with Pods

# List all pods in all namespaces
kubectl get pods -A
kubectl get pods --all-namespaces  # Same thing

# Pods in specific namespace
kubectl get pods -n kube-system

# Detailed pod info
kubectl describe pod/coredns-xxx -n kube-system

# Pod logs
kubectl logs pod-name
kubectl logs -f pod-name  # Follow (tail -f)
kubectl logs pod-name --previous  # Previous container (if crashed)

# Execute command in pod
kubectl exec -it pod-name -- bash
kubectl exec pod-name -- ls /app

Deployments

# List deployments
kubectl get deployments
kubectl get deploy  # Short form

# Scale deployment
kubectl scale deployment/my-app --replicas=5

# Update image
kubectl set image deployment/my-app container-name=new-image:v2

# Rollout status
kubectl rollout status deployment/my-app

# Rollback
kubectl rollout undo deployment/my-app
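
# View revision history (what a rollback would return to)
kubectl rollout history deployment/my-app
kubectl rollout history deployment/my-app --revision=2  # Details for one revision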

Services

# List services
kubectl get services
kubectl get svc  # Short form

# Get service details
kubectl describe svc/my-app

# Get LoadBalancer IP
kubectl get svc -o wide

Applying Manifests

# Apply YAML file
kubectl apply -f deployment.yaml

# Apply all files in directory
kubectl apply -f ./manifests/

# Apply from URL
kubectl apply -f https://example.com/manifest.yaml

# Delete resources
kubectl delete -f deployment.yaml
kubectl delete deployment/my-app
kubectl delete pod/my-app-xxx

Troubleshooting

# Events (very useful for debugging)
kubectl get events --sort-by='.lastTimestamp'
kubectl get events -n kube-system

# Resource usage
kubectl top nodes
kubectl top pods
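# Note: kubectl top requires the metrics-server API
# (k3s bundles metrics-server by default; other distros may need to install it)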

# Describe (detailed info + events)
kubectl describe node/k3s-master
kubectl describe pod/my-app-xxx

# Port forwarding (access pod locally)
kubectl port-forward pod/my-app 8080:80
# Now access: http://localhost:8080
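
# Port forwarding also works against a Service, which avoids pinning
# to a single pod name (my-app is a placeholder)
kubectl port-forward svc/my-app 8080:80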

Kubeconfig File Permissions

Always set proper permissions:

# Set restrictive permissions (owner read/write only)
chmod 600 ~/.kube/config

# Verify
ls -la ~/.kube/config
# Should show: -rw------- (600)

Why?

- Prevents other users on the system from reading credentials
- Standard security practice
- kubectl will warn if permissions are too open
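
On shared machines it's also worth locking down the directory itself, not just the file:

# Restrict the whole ~/.kube directory to the owner
chmod 700 ~/.kube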


Summary

| Question | Answer |
| --- | --- |
| What is kubectl? | Command-line tool for managing Kubernetes clusters |
| Where is kubectl installed? | Proxmox host (/usr/local/bin/kubectl), k3s nodes (symlink to k3s) |
| What's needed to manage the cluster? | kubectl binary + kubeconfig file + network access to master:6443 |
| Where is the kubeconfig? | Master: /etc/rancher/k3s/k3s.yaml; everywhere else: ~/.kube/config |
| Why two locations? | k3s generates /etc/rancher/k3s/k3s.yaml; the kubectl standard is ~/.kube/config |
| Can I manage from my laptop? | Yes! Copy the kubeconfig + install kubectl |
| Is the kubeconfig sensitive? | YES! Full admin access - protect it like a root password |

Next Steps

Now that you understand kubectl and kubeconfig, proceed to:

- Core Infrastructure - deploy MetalLB, Longhorn, cert-manager
- Commands Reference - quick command reference


Additional Resources