
GitLab Self-Hosted Evaluation

Comprehensive evaluation of introducing GitLab Community Edition to Tower Fleet infrastructure for DevOps workflows.

Evaluation Date: 2025-11-22
Status: Not Recommended
Recommendation: Use GitHub + GitHub Actions + Flux (per existing roadmap)


Executive Summary

After thorough analysis of GitLab's capabilities, infrastructure requirements, and alignment with Tower Fleet's needs, we do not recommend introducing GitLab at this time. While GitLab offers compelling all-in-one DevOps features, the cost-benefit analysis doesn't justify the resource overhead, complexity, and disaster recovery implications for our homelab environment.

Key reasons:

  • High resource requirements (8-16GB RAM) on a constrained cluster
  • Disaster recovery complexity vs GitHub's offsite redundancy
  • Duplication with the planned GitHub Actions + Flux implementation
  • Significant maintenance overhead for minimal practical gain


Current State Analysis

Existing Git & DevOps Workflow

Version Control:

  • All active projects hosted on GitHub (tower-fleet, money-tracker, subtitleai, home-portal, trip-planner)
  • Simple workflow: commit frequently, push to main branch
  • Clean commit conventions: type: description format
  • GitHub provides critical offsite backup for infrastructure-as-code

Deployment Process:

  • Current approach: Manual deployments via bash scripts in /root/tower-fleet/scripts/deploy-*.sh (general shape sketched below)
  • Workflow: Pull git → update env → build image → tag version → push to k3s registry → kubectl update → health checks
  • Image versioning: Auto-incremented semver tags (v1.0.2 → v1.0.3)
  • No CI/CD pipeline yet - roadmap includes "GitHub Actions → ArgoCD/Flux" (medium priority)
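
Each deploy-*.sh script reduces to roughly this shape (a minimal sketch; the app name, build path, and hand-set tag are illustrative assumptions, since the real scripts auto-increment the semver tag):

#!/usr/bin/env bash
# Sketch of one deploy-*.sh script; app name, build path, and the hand-set
# tag below are illustrative (the real scripts auto-increment the tag)
set -euo pipefail

APP="home-portal"
REGISTRY="10.89.97.201:30500"
TAG="v1.0.3"

git pull                                                   # sync the repo
docker build -t "${REGISTRY}/${APP}:${TAG}" "apps/${APP}"  # build the image
docker push "${REGISTRY}/${APP}:${TAG}"                    # push to the k3s registry
kubectl set image "deployment/${APP}" "${APP}=${REGISTRY}/${APP}:${TAG}"
kubectl rollout status "deployment/${APP}" --timeout=120s  # health check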

Disaster Recovery:

  • OPNsense: Automated weekly encrypted backups to /vault + tower-fleet repo
  • Kubeconfig backups to secure locations
  • Git as source of truth for all configurations
  • GitHub is the critical offsite backup for complete infrastructure recovery


GitLab Feature Comparison

What GitLab Offers

Integrated Platform:

  • Git repository hosting
  • Built-in CI/CD pipelines
  • Issue tracking, project management, wikis
  • Container registry
  • Kubernetes integration via GitLab Agent
  • GitOps support (Flux + agentk recommended)
  • Self-hosted Community Edition (free and open source)

Kubernetes Features:

  • Built-in container registry with vulnerability scanning
  • Direct kubectl integration in CI/CD pipelines
  • Kubernetes cluster management UI
  • GitOps workflows with GitLab Agent
  • Cluster observability and metrics

What We Already Have

Feature                  GitLab Self-Hosted       Current Tower Fleet Stack
Git hosting              ✅ GitLab CE             ✅ GitHub (free, offsite)
Container registry       ✅ Built-in              ✅ k3s registry (10.89.97.201:30500)
CI/CD pipelines          ✅ GitLab CI             ⚪ Planned: GitHub Actions
GitOps                   ✅ Agent + Flux          ⚪ Planned: ArgoCD/Flux
Kubernetes integration   ✅ Built-in              ✅ kubectl + deployment scripts
Issue tracking           ✅ Built-in              ❌ Not using
Offsite backup           ❌ Self-hosted only      ✅ GitHub (critical!)
Resource overhead        ❌ High (8-16GB RAM)     ✅ Minimal

Detailed Pros & Cons Analysis

PROS of Adding GitLab

1. All-in-One DevOps Platform

  • Single integrated tool for git, CI/CD, registry, issues
  • Reduces tool sprawl
  • Unified authentication and permissions

2. Native Kubernetes Integration

  • GitLab Agent simplifies cluster access and management
  • Deploy directly from pipelines without managing kubeconfig externally
  • Built-in cluster observability in the GitLab UI
  • Automatic environment tracking

3. Self-Hosted Control

  • Complete data ownership and privacy
  • No reliance on external services for critical DevOps workflows
  • Customize to exact infrastructure needs
  • No vendor lock-in

4. Built-in Container Registry

  • Could replace the k3s registry
  • Fully integrated with CI/CD pipelines
  • Vulnerability scanning (Premium tier)
  • Image lifecycle management

5. GitOps Workflow Support

  • Supports the Flux + agentk pattern
  • Automatic drift remediation
  • Aligns with infrastructure-as-code goals

6. Advanced CI/CD Features

  • Parallel job execution
  • Matrix builds
  • Built-in caching
  • Extensive pipeline syntax
  • Visual pipeline editor

CONS of Adding GitLab

1. Resource Requirements (CRITICAL)

  • Minimum: 4GB RAM, 4 CPU cores
  • Recommended: 8GB RAM for small teams/CI workloads
  • Reality: CI/CD workloads can spike to 16GB RAM during builds
  • Context: Currently running 23 LXCs + 5 VMs on limited resources
  • Impact: Would require a dedicated VM with 8-16GB RAM (33-67% of the cluster's 24GB total capacity)

2. Complexity & Maintenance Overhead

  • GitLab requires multiple components: PostgreSQL, Redis, Sidekiq, Gitaly, NGINX
  • Complex multi-step upgrade paths (cannot skip versions)
  • Backup/restore is more involved than simple git repositories
  • Current workflow is simple and functional - adding complexity without clear ROI

3. Disaster Recovery Concerns (MAJOR)

  • Self-hosted only = no offsite backup unless replicated to external storage
  • GitHub provides critical offsite redundancy for infrastructure recovery
  • GitLab backup requires (sketched after this list):
      • PostgreSQL database dumps
      • Repository storage backup
      • Configuration files
      • Secrets and CI/CD variables
      • Registry images
  • RTO increase: Complete rebuild takes significantly longer vs GitHub (simple clone and deploy)
  • RPO concerns: Backup frequency and verification overhead
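
For scale: the omnibus backup command is one line, but it deliberately excludes configuration and secrets, so a complete cycle looks roughly like this (a sketch; the /vault destination mirrors our existing backup conventions and is otherwise an assumption):

# Sketch of a complete GitLab backup cycle (destination paths assumed)
sudo gitlab-backup create                                   # DB, repos, uploads -> /var/opt/gitlab/backups
sudo cp /etc/gitlab/gitlab.rb /vault/gitlab/                # config is NOT in the backup archive
sudo cp /etc/gitlab/gitlab-secrets.json /vault/gitlab/      # neither are secrets
rsync -a /var/opt/gitlab/backups/ /vault/gitlab/backups/    # offsite sync is still on us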

4. GitHub Already Planned for CI/CD

  • Roadmap includes "CI/CD Pipeline (GitHub Actions → ArgoCD/Flux)" (medium priority)
  • GitHub Actions advantages:
      • Free for public repos
      • 2000 minutes/month free for private repos
      • No infrastructure overhead (runs on GitHub's compute)
      • Integrates seamlessly with existing GitHub repositories
      • Mature ecosystem of actions and integrations

5. Duplication, Not Replacement

  • Stated goal: "GitHub will be used for offsite redundancy"
  • This means maintaining TWO systems:
      • GitLab as the primary DevOps platform
      • GitHub as the offsite backup repository
  • Push/sync overhead: All changes must flow to both systems
  • Complexity: Two sources of truth to keep synchronized
  • Risk: Sync failures leading to data divergence

6. Container Registry Duplication

  • Existing k3s registry (10.89.97.201:30500) works perfectly
  • GitLab registry adds:
      • Additional storage consumption for duplicate images
      • Another service to maintain and monitor
  • Minimal practical benefit over the current solution
  • Single point of failure (deployments require GitLab to be running)

7. Limited Native GitOps Support

  • GitLab lacks a robust native GitOps implementation (per GitLab documentation)
  • Requires third-party tools (Flux or ArgoCD) anyway
  • Would still need Flux - so GitLab adds a layer without replacing anything

8. Infrastructure Provisioning Requirements

Would need to provision:

  • New VM: VM 204 with 8GB RAM minimum, 4 CPU, 100GB disk (see the sketch below)
  • Install GitLab omnibus: PostgreSQL, Redis, Gitaly, NGINX, Sidekiq
  • Configure backups: New scripts for GitLab-specific backup procedures
  • Set up the GitLab Agent: Deploy to the k3s cluster for integration
  • Configure GitLab Runners: For CI/CD job execution
  • Network configuration: LoadBalancer IP or Ingress setup
  • Monitoring integration: Add to the Prometheus/Grafana observability stack
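
On Proxmox, the first provisioning step alone looks roughly like this (a sketch; VM ID 204 comes from the list above, while the storage pool and bridge names are assumptions):

# Sketch: provision VM 204 on Proxmox (storage pool and bridge names assumed)
qm create 204 --name gitlab --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:100 \
  --boot order=scsi0
# ...then attach an installer ISO, install the OS, and run the GitLab omnibus installer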


Recommended Alternative: GitHub + GitHub Actions + Flux

This approach aligns with the existing roadmap and provides superior ROI for our homelab environment.

Proposed Architecture

GitHub (source of truth, offsite backup)
    ↓
GitHub Actions (CI: build, test, security scans)
    ↓
k3s Registry (10.89.97.201:30500)
    ↓
Flux (GitOps: watches GitHub, auto-deploys to cluster)
    ↓
Kubernetes Cluster (3-node k3s)

Benefits Over GitLab

1. Zero Additional Infrastructure

  • GitHub Actions runs on GitHub's infrastructure (free tier: 2000 minutes/month)
  • No VM or LXC needed for CI/CD runners
  • Saves 8-16GB RAM on the constrained cluster
  • No additional storage overhead

2. Offsite by Default

  • GitHub is already the offsite backup solution
  • No sync/replication complexity
  • Disaster recovery: git clone tower-fleet && flux bootstrap (sketched below)
  • Simple, proven recovery path
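
Concretely, recovery from total cluster loss reduces to a few commands (a sketch; the repo owner matches the checklist later in this document, and token handling is an assumption):

# Sketch: rebuild GitOps state from GitHub after total cluster loss
git clone https://github.com/jakecelentano/tower-fleet.git
export GITHUB_TOKEN=<personal-access-token>   # repo-scoped PAT
flux bootstrap github \
  --owner=jakecelentano \
  --repository=tower-fleet \
  --path=k8s/flux-system
# Flux then reconciles every manifest under k8s/ onto the fresh cluster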

3. Simpler Workflow

Example GitHub Actions workflow:

# .github/workflows/deploy-home-portal.yml
name: Deploy Home Portal
on:
  push:
    branches: [main]
    paths:
      - 'apps/home-portal/**'

permissions:
  contents: write   # required for the manifest-update commit below

jobs:
  build-and-push:
    # NOTE: GitHub-hosted runners cannot reach a LAN registry like 10.89.97.201;
    # this assumes network reachability (e.g., a self-hosted runner, see Phase 3)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build and push to k3s registry
        run: |
          docker build -t 10.89.97.201:30500/home-portal:${{ github.sha }} .
          docker tag 10.89.97.201:30500/home-portal:${{ github.sha }} 10.89.97.201:30500/home-portal:latest
          docker push 10.89.97.201:30500/home-portal:${{ github.sha }}
          docker push 10.89.97.201:30500/home-portal:latest

      - name: Update manifest
        run: |
          sed -i "s|image:.*|image: 10.89.97.201:30500/home-portal:${{ github.sha }}|" k8s/deployment.yaml
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add k8s/deployment.yaml
          git commit -m "chore: update home-portal image to ${{ github.sha }}"
          git push origin HEAD:main   # checkout leaves a detached HEAD, so push explicitly to main

Flux automatically detects the manifest change and deploys it to the cluster.

4. Flux Handles GitOps

  • Watches the tower-fleet GitHub repository for changes
  • Automatically deploys updated manifests to the cluster
  • Drift detection and auto-remediation
  • Current deploy scripts evolve to: build → push → commit manifest → Flux deploys (setup sketched below)
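
After bootstrap, pointing Flux at the manifests takes two objects, which the flux CLI can generate (a sketch; the object names, path, and intervals are assumptions):

# Sketch: register the repo and reconcile its k8s/ directory (names/intervals assumed)
flux create source git tower-fleet \
  --url=https://github.com/jakecelentano/tower-fleet \
  --branch=main \
  --interval=1m

flux create kustomization tower-fleet-apps \
  --source=GitRepository/tower-fleet \
  --path="./k8s" \
  --prune=true \
  --interval=10m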

5. Perfect Roadmap Alignment

  • Roadmap already includes: "CI/CD Pipeline (GitHub Actions → ArgoCD/Flux)" (medium priority)
  • GitLab would be a detour, not progress toward stated goals
  • Implementation builds on the existing GitHub investment

6. Cost Comparison

Aspect                GitLab Self-Hosted                     GitHub + Actions + Flux
Infrastructure        VM: 8GB RAM, 4 CPU, 100GB disk         None (GitHub-hosted CI/CD)
Storage overhead      50-100GB (repos + registry + DB)       Minimal (Flux ~100MB)
Maintenance burden    High (GitLab updates, backups)         Low (Flux updates only)
Backup complexity     High (DB + repos + config + secrets)   Low (git clone)
Offsite redundancy    Must configure separately              Built-in (GitHub)
Monthly electricity   ~$5-10 (24/7 VM)                       $0 (Flux negligible)
Setup time            8-16 hours                             2-4 hours
Monthly maintenance   2-4 hours                              <1 hour

Disaster Recovery Implications

Current Strategy (GitHub-based)

  • RTO (Recovery Time Objective): 30-45 minutes (rebuild cluster from scratch)
  • RPO (Recovery Point Objective): Last git push (typically <1 hour)
  • Backup method: Automated via git push
  • Restore procedure: git clone tower-fleet && kubectl apply -f k8s/
  • Offsite location: GitHub (geographically distributed, highly available)
  • Simplicity: Single command recovery

With GitLab Primary (Dual-sync scenario)

  • RTO: 60+ minutes (restore the GitLab VM first, verify data, then deploy the cluster)
  • RPO: Last GitLab backup cycle (could be hours if not properly automated)
  • Backup method: Complex multi-step process:
      • GitLab omnibus backup (PostgreSQL dump + repository archives)
      • Configuration files backup
      • Secrets and CI/CD variables export
      • Container registry images backup
      • Sync to GitHub (requires automation)
  • Restore procedure: Multi-step:
      1. Provision a new GitLab VM
      2. Install GitLab omnibus
      3. Restore database and repositories
      4. Restore configuration
      5. Verify GitLab functionality
      6. Deploy the cluster from GitLab
  • Offsite location: Requires manual sync to GitHub (automation complexity)
  • Risk: Sync failures create data divergence between GitLab and GitHub

Verdict: GitLab significantly worsens disaster recovery posture with increased complexity and longer recovery times.


Use Cases Where GitLab WOULD Make Sense

GitLab self-hosted would be compelling if Tower Fleet had:

  1. Enterprise-scale resources: 100+ GB RAM across cluster (not 24GB)
  2. Team collaboration: Multiple contributors requiring built-in MR workflow, issue tracking, wiki
  3. GitHub Actions constraints: Exceeding 2000 minutes/month free tier
  4. Compliance requirements: Strict data sovereignty mandates
  5. Air-gapped environment: No internet access, cannot use GitHub
  6. Heavy CI/CD workloads: Building complex multi-stage pipelines with matrix builds
  7. Advanced security scanning: Need for SAST/DAST tooling (Premium/Ultimate features)

Our Reality: None of these conditions apply. Tower Fleet:

  • Is run by a solo operator with simple workflows
  • Has limited cluster resources (24GB total RAM)
  • Has minimal CI/CD needs (currently manual, works well)
  • Is nowhere near the GitHub Actions free tier limits
  • Requires offsite backup (is not air-gapped)


Specific Considerations for Tower Fleet

1. Kubernetes GitOps (Roadmap Goal)

GitLab approach:

GitLab Instance (8GB RAM) → GitLab Agent → Flux → Kubernetes

  • Components: GitLab omnibus + GitLab Agent + Flux
  • Complexity: High (three systems to maintain)
  • Resource cost: 8GB RAM for GitLab + 100MB for Agent + 100MB for Flux
  • Total overhead: ~8.2GB RAM

GitHub approach:

GitHub (hosted) → Flux → Kubernetes

  • Components: Flux only (GitHub is external)
  • Complexity: Low (one system to maintain)
  • Resource cost: ~100MB RAM for Flux
  • Total overhead: ~100MB RAM

Winner: GitHub (80x more resource efficient, simpler architecture)

2. Container Registry Strategy

Current k3s registry (10.89.97.201:30500):

  • Works perfectly for current needs
  • NodePort service on the k3s master node
  • No authentication needed (internal network only)
  • Zero maintenance overhead
  • Minimal resource usage (contents verifiable as shown below)
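
Anything that speaks the standard registry v2 API can inspect it directly (a sketch; assumes the plain-HTTP NodePort above and an example repository name):

# Sketch: poke the registry over the standard v2 API (plain HTTP, LAN only)
curl http://10.89.97.201:30500/v2/_catalog                 # list repositories
curl http://10.89.97.201:30500/v2/home-portal/tags/list    # tags for one app (name assumed)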

GitLab registry alternative:

  • Adds storage overhead (duplicate images in GitLab)
  • Requires the GitLab VM to be running for deployments (single point of failure)
  • Authentication and permission complexity
  • Additional monitoring and maintenance
  • No meaningful benefit over the current registry at our scale

Winner: Keep k3s registry (simple, proven, no overhead)

3. CI/CD Pipeline Comparison

GitLab CI:

# .gitlab-ci.yml
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  tags:
    - docker

  • Runs on self-hosted GitLab Runners
  • Resource cost: Runners consume RAM/CPU during builds
  • Requires runner provisioning and maintenance
  • Build artifacts stored on the GitLab VM

GitHub Actions:

# .github/workflows/deploy.yml
jobs:
  build:
    runs-on: ubuntu-latest  # Runs on GitHub's infrastructure
    steps:
      - uses: actions/checkout@v3
      - name: Build and push
        run: |
          docker build -t 10.89.97.201:30500/app:${{ github.sha }} .
          docker push 10.89.97.201:30500/app:${{ github.sha }}

  • Runs on GitHub's infrastructure (free tier: 2000 min/month)
  • Resource cost: Zero on our infrastructure
  • No runner maintenance
  • Mature action ecosystem

Winner: GitHub Actions (free compute, zero infrastructure overhead)


Infrastructure Cost Analysis

GitLab Self-Hosted Total Cost of Ownership

Initial Setup:

  • Create VM 204: 8GB RAM, 4 CPU cores, 100GB disk
  • Install the GitLab omnibus package (PostgreSQL, Redis, Gitaly, NGINX, Sidekiq)
  • Configure automated backups to /vault/k8s-backups/
  • Set up the GitLab Agent on the k3s cluster
  • Configure a GitLab Runner for CI/CD job execution
  • Integrate with Prometheus/Grafana monitoring
  • Configure Ingress or LoadBalancer for external access
  • Time investment: 8-16 hours initial setup

Ongoing Operational Costs:

  • Resource consumption: 8GB RAM (33% of total cluster), 4 CPU cores, 100GB storage
  • Monthly maintenance: GitLab updates, backup verification, runner maintenance, monitoring
  • Electricity cost: ~$5-10/month (24/7 VM at ~50W additional load)
  • Backup storage: Additional 50-100GB for GitLab backup archives
  • Time investment: 2-4 hours/month maintenance
  • Annual cost: $60-120 electricity + ~30-50 hours labor

GitHub + Flux Total Cost of Ownership

Initial Setup:

  • Install Flux on the cluster: flux bootstrap github
  • Configure GitHub Actions workflows for each app
  • Set up the repository structure for GitOps
  • Time investment: 2-4 hours total

Ongoing Operational Costs:

  • Resource consumption: ~100MB RAM (Flux controllers), negligible CPU
  • Monthly maintenance: Minimal (Flux auto-updates, GitHub Actions just works)
  • Electricity cost: $0 (Flux overhead negligible)
  • Storage overhead: Minimal (Flux controllers ~100MB)
  • Time investment: <1 hour/month (reviewing Actions runs, updating workflows as needed)
  • Annual cost: $0 electricity + ~10 hours labor

Cost Difference: GitLab costs ~$60-120/year in electricity alone, plus 3-5x more labor hours.


Migration Path (If Requirements Change)

If future needs justify GitLab, here's a low-risk migration approach:

Hybrid Approach: GitHub + GitLab CI/CD

  1. Keep GitHub as source of truth (primary repository hosting)
  2. Set up GitLab repository mirroring (one-way: GitHub → GitLab)
  3. Use GitLab CI/CD only (leverage pipelines while repos stay on GitHub)
  4. Maintain GitHub for disaster recovery (offsite backup preserved)

Benefits of this approach:

  • Experiment with GitLab CI/CD features
  • Keep GitHub's offsite redundancy
  • Avoid dual-sync complexity
  • Easy rollback to GitHub-only if GitLab doesn't add value

Setup:

# In GitLab: Settings → Repository → Mirroring repositories
# Configure pull mirror from GitHub
# Trigger: Poll interval or webhook

This lets us test GitLab incrementally without committing fully.


Decision Matrix

Criterion              Weight    GitLab Self-Hosted              GitHub + Actions + Flux       Winner
Resource efficiency    High      2/10 (8-16GB RAM)               10/10 (~100MB RAM)            GitHub
Maintenance burden     High      3/10 (complex updates)          9/10 (minimal)                GitHub
Disaster recovery      Critical  4/10 (complex multi-step)       10/10 (git clone)             GitHub
Offsite backup         Critical  3/10 (requires separate setup)  10/10 (built-in)              GitHub
CI/CD compute cost     Medium    4/10 (self-hosted runners)      10/10 (free GitHub-hosted)    GitHub
GitOps support         Medium    6/10 (Agent + Flux)             9/10 (Flux native)            GitHub
Roadmap alignment      High      2/10 (detour from plan)         10/10 (already planned)       GitHub
Time to implement      Medium    3/10 (8-16 hours)               9/10 (2-4 hours)              GitHub
Monthly cost           Medium    4/10 ($5-10 + time)             10/10 ($0)                    GitHub
Learning curve         Low       5/10 (steep)                    7/10 (moderate)               GitHub
Future flexibility     Medium    6/10 (GitLab lock-in)           9/10 (can add runners later)  GitHub

Weighted Score:

  • GitLab: 3.8/10
  • GitHub + Flux: 9.5/10

Clear Winner: GitHub + GitHub Actions + Flux


Final Recommendation

Short Answer

Do not introduce GitLab to Tower Fleet infrastructure. Continue with GitHub + implement GitHub Actions + Flux per the existing roadmap.

Rationale

  1. Resource constraints are prohibitive
      • Cluster has 24GB total RAM across 3 nodes
      • GitLab requires 8-16GB (33-67% of cluster capacity)
      • Cannot justify dedicating this much to DevOps tooling

  2. Disaster recovery is critically important
      • GitHub's offsite hosting is foundational to the DR strategy
      • GitLab complicates recovery with multi-step restoration
      • Dual-sync to GitHub for backup adds significant complexity and risk

  3. Complexity without commensurate value
      • GitLab maintenance overhead (updates, backups, monitoring) is high
      • The current simple workflow (git + deploy scripts) works well
      • GitHub Actions provides 90% of the value with 10% of the complexity

  4. Unnecessary duplication
      • Would mean maintaining two systems (GitLab primary + GitHub backup)
      • Sync overhead and drift risks
      • The k3s registry already handles container images perfectly

  5. Perfect alignment with roadmap
      • "CI/CD Pipeline (GitHub Actions → ArgoCD/Flux)" is already planned (medium priority)
      • Implementing GitLab would be a detour from stated goals
      • The GitHub approach is superior for our specific context

What to Do Instead

Phase 1: Immediate (Next Quarter)

  1. Implement GitHub Actions for CI/CD
  2. Create workflow files for each application
  3. Build and test on GitHub's free runners
  4. Push images to existing k3s registry
  5. Add security scanning (Trivy for vulnerabilities)
  6. Integrate linting and type checking
  7. Deliverable: .github/workflows/deploy-{app}.yml for all apps

  8. Deploy Flux for GitOps

  9. Bootstrap Flux on cluster: flux bootstrap github
  10. Configure Flux to watch tower-fleet repository
  11. Auto-deploy manifests from k8s/ directory
  12. Enable drift detection and auto-remediation
  13. Deliverable: Full GitOps workflow operational
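
For the security-scanning item, the Trivy CLI can gate an image before it reaches the registry (a sketch; the app name and severity threshold are assumptions):

# Sketch: fail the build if a freshly built image has serious CVEs (threshold assumed)
trivy image --severity HIGH,CRITICAL --exit-code 1 \
  10.89.97.201:30500/home-portal:latest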

Phase 2: Medium Term (2-3 Months)

  1. Enhance deployment automation
      • Keep deploy scripts for manual deployments (troubleshooting)
      • Add automated testing to GitHub Actions (unit + integration)
      • Implement preview environments for PRs
      • Add deployment notifications (Slack/Discord webhook)
      • Deliverable: Fully automated CI/CD pipeline

  2. Optimize workflows
      • Add caching for faster builds
      • Implement matrix builds for multi-arch (if needed)
      • Set up scheduled workflows (weekly security scans)
      • Deliverable: Fast, efficient CI/CD pipeline

Phase 3: Long Term (6+ Months)

  1. Self-hosted runners if needed (unlikely, but possible)
      • If the GitHub Actions free tier becomes constraining
      • Deploy a lightweight self-hosted runner in an LXC (~2GB RAM)
      • Still uses GitHub, just runs builds locally
      • Comparison: 2GB RAM vs GitLab's 8-16GB requirement
      • Deliverable: Expanded CI/CD capacity if needed

Edge Cases for Reconsideration

Only reconsider GitLab if Tower Fleet experiences:

  • Massive resource expansion: Add 32+ GB RAM to cluster (can spare 8-16GB)
  • Team growth: Hire 3+ collaborators needing built-in collaboration features
  • GitHub Actions limits: Exceed 2000 min/month free tier consistently
  • Compliance shift: Develop air-gapped or strict data sovereignty requirements
  • Advanced pipeline needs: Require complex GitLab-specific CI/CD features

Likelihood for homelab: Very low (<5% probability)


Implementation Checklist (GitHub + Flux Approach)

When ready to implement the recommended approach:

Pre-work:

- [ ] Review the Production Deployment guide
- [ ] Install the Flux CLI on the Proxmox host: curl -s https://fluxcd.io/install.sh | sudo bash
- [ ] Create a GitHub personal access token with repo permissions

Phase 1: Flux Bootstrap

- [ ] Bootstrap Flux: flux bootstrap github --owner=jakecelentano --repository=tower-fleet --path=k8s/flux-system
- [ ] Verify Flux controllers: kubectl get pods -n flux-system
- [ ] Configure Flux to watch k8s manifests in the tower-fleet repo

Phase 2: GitHub Actions Setup

- [ ] Create the .github/workflows/ directory in each app repository
- [ ] Create deployment workflow for home-portal
- [ ] Create deployment workflow for money-tracker
- [ ] Create deployment workflow for trip-planner
- [ ] Create deployment workflow for subtitleai
- [ ] Test workflows with a manual trigger
- [ ] Verify images pushed to the k3s registry

Phase 3: Integration Testing

- [ ] Make a manifest change and push to GitHub
- [ ] Verify Flux detects the change and deploys (see the commands after this list)
- [ ] Make a code change and push to GitHub
- [ ] Verify GitHub Actions builds and pushes the image
- [ ] Update the manifest with the new image tag
- [ ] Verify Flux deploys the updated image
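
The Flux CLI makes the verification steps one-liners (a sketch; the source and kustomization names assume the Flux setup sketched earlier):

# Sketch: verify reconciliation during integration testing (object names assumed)
flux get kustomizations                       # overall sync status
flux reconcile source git tower-fleet         # force an immediate pull from GitHub
flux reconcile kustomization tower-fleet-apps --with-source
kubectl get deployments -A                    # confirm the new image tags rolled out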

Phase 4: Documentation & Training

- [ ] Update the production deployment documentation
- [ ] Document the new CI/CD workflow
- [ ] Create a runbook for troubleshooting Actions/Flux
- [ ] Archive old deploy scripts (keep for reference)


Conclusion

GitLab is an excellent DevOps platform for teams with sufficient resources and complex collaboration needs. However, for Tower Fleet's homelab context—limited cluster resources, solo operator, simple workflows, and critical need for offsite backup—the GitHub + GitHub Actions + Flux approach is clearly superior.

This evaluation preserves the option to introduce GitLab in the future if requirements change dramatically, while recommending the most pragmatic path forward: leverage existing GitHub investment and implement lightweight, proven GitOps workflows.

Next Steps:

  1. Mark this evaluation as complete
  2. Proceed with the GitHub Actions + Flux implementation per the roadmap
  3. Revisit if infrastructure scales to 3-4x current capacity


References & Sources

GitLab Documentation:

  • GitLab Kubernetes Integration
  • GitLab CI/CD with Kubernetes
  • GitLab GitOps Workflows
  • GitLab Installation Requirements
  • GitLab Backup and Restore
  • GitLab Disaster Recovery

Comparison Resources:

  • Self-Hosted GitLab Complete Guide
  • GitLab vs GitHub for Homelabs
  • GitLab vs GitHub 2025 Comparison
  • Self-Hosted Git Alternatives (HN Discussion)

Tower Fleet Documentation:

  • Production App Deployment Guide
  • Disaster Recovery Procedures
  • Infrastructure Roadmap
  • Kubernetes Overview


Evaluation Completed: 2025-11-22
Decision: Do not implement GitLab
Recommended Path: GitHub + GitHub Actions + Flux
Review Date: 2026-Q2 (if requirements change)