Supabase JWT Key Synchronization

Critical: Kong, Supabase's API gateway, must be configured with the same JWT keys that are stored in the supabase-secrets Secret, or all authentication will fail.


The Problem

Kong (Supabase's API gateway) maintains its own list of valid API keys as "consumers". If these keys don't match the actual JWT tokens stored in supabase-secrets, Kong will reject all authentication requests with a 401 Unauthorized before they ever reach GoTrue (the auth service).

Symptoms:
- Login fails with "Invalid authentication credentials"
- 401 errors in Kong logs
- No corresponding logs in GoTrue (requests never reach it)
- Works in Supabase Studio but not in apps
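
To reproduce the failure directly, send a request through Kong with the key your app uses. A minimal sketch (the gateway URL and key value below are placeholders, not values from this cluster):

# Hypothetical values -- substitute your gateway URL and the key your app ships
SUPABASE_URL="https://supabase.example.com"
APP_KEY="<key-from-your-app-env>"

# Kong's key-auth plugin validates the apikey header against its consumers
curl -i "$SUPABASE_URL/rest/v1/" -H "apikey: $APP_KEY"

A 401 with "Invalid authentication credentials" answered by Kong itself, with nothing in the GoTrue or PostgREST logs, means the key never matched a configured consumer.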


Root Cause

Kong's configuration (kong-config ConfigMap) contains hardcoded JWT keys:

consumers:
  - username: anon
    keyauth_credentials:
      - key: eyJhbGc...  # Must match supabase-secrets ANON_KEY
  - username: service_role
    keyauth_credentials:
      - key: eyJhbGc...  # Must match supabase-secrets SERVICE_ROLE_KEY

If these keys become out of sync with supabase-secrets, authentication breaks.

Common causes:
1. JWT tokens regenerated but Kong config not updated
2. JWT_SECRET changed but keys not regenerated
3. Manual updates to supabase-secrets without updating Kong
4. Different keys used during initial setup
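
If you suspect cause 2, you can inspect a stored key's payload with nothing but bash (a sketch; no signature verification, and JWT payloads are unpadded base64url, so padding is restored by hand):

# Pull the anon key and decode its JWT payload
PAYLOAD=$(kubectl get secret supabase-secrets -n supabase \
  -o jsonpath='{.data.ANON_KEY}' | base64 -d | cut -d. -f2)
# base64url -> base64, re-pad to a multiple of 4, then decode
echo "$PAYLOAD" | tr '_-' '/+' | awk '{ while (length($0) % 4) $0 = $0 "="; print }' | base64 -d; echo
# Expect something like {"role":"anon","iss":"supabase","iat":...,"exp":...}

An iat older than your last JWT_SECRET rotation is a strong hint the key was signed with the old secret.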


Verification Script

Run this to check if keys are in sync:

#!/bin/bash
# Check JWT key synchronization

echo "=== Checking Supabase JWT Key Sync ==="

# Get keys from Supabase secrets
ANON_KEY_SECRET=$(kubectl get secret supabase-secrets -n supabase -o jsonpath='{.data.ANON_KEY}' | base64 -d)
SERVICE_KEY_SECRET=$(kubectl get secret supabase-secrets -n supabase -o jsonpath='{.data.SERVICE_ROLE_KEY}' | base64 -d)

# Get keys from Kong config (the key line sits two lines below the username
# line, so use -A 2; head -1 guards against a duplicate match in the
# last-applied-configuration annotation)
ANON_KEY_KONG=$(kubectl get configmap kong-config -n supabase -o yaml | grep -A 2 "username: anon" | grep "key:" | awk '{print $3}' | head -n 1)
SERVICE_KEY_KONG=$(kubectl get configmap kong-config -n supabase -o yaml | grep -A 2 "username: service_role" | grep "key:" | awk '{print $3}' | head -n 1)

echo ""
echo "ANON_KEY comparison:"
if [ "$ANON_KEY_SECRET" = "$ANON_KEY_KONG" ]; then
    echo "✅ ANON_KEY is in sync"
else
    echo "❌ ANON_KEY MISMATCH!"
    echo "   Secret: ${ANON_KEY_SECRET:0:50}..."
    echo "   Kong:   ${ANON_KEY_KONG:0:50}..."
fi

echo ""
echo "SERVICE_ROLE_KEY comparison:"
if [ "$SERVICE_KEY_SECRET" = "$SERVICE_KEY_KONG" ]; then
    echo "✅ SERVICE_ROLE_KEY is in sync"
else
    echo "❌ SERVICE_ROLE_KEY MISMATCH!"
    echo "   Secret: ${SERVICE_KEY_SECRET:0:50}..."
    echo "   Kong:   ${SERVICE_KEY_KONG:0:50}..."
fi

if [ "$ANON_KEY_SECRET" != "$ANON_KEY_KONG" ] || [ "$SERVICE_KEY_SECRET" != "$SERVICE_KEY_KONG" ]; then
    echo ""
    echo "⚠️  KEYS ARE OUT OF SYNC - Authentication will fail!"
    echo "   Run: /root/tower-fleet/scripts/fix-supabase-keys.sh"
    exit 1
fi

echo ""
echo "✅ All keys are in sync!"

Save as /root/tower-fleet/scripts/check-supabase-keys.sh, make it executable (chmod +x), and run it after any Supabase change.


Fix Script

If keys are out of sync, run this to fix:

#!/bin/bash
# Sync Kong configuration with Supabase secrets

set -e

echo "=== Syncing Supabase JWT Keys ==="

# Backup current Kong config
echo "Creating backup..."
BACKUP="/tmp/kong-config-backup-$(date +%Y%m%d-%H%M%S).yaml"
kubectl get configmap kong-config -n supabase -o yaml > "$BACKUP"
echo "✅ Backup saved to $BACKUP"

# Get current keys from secrets
echo ""
echo "Reading keys from supabase-secrets..."
ANON_KEY=$(kubectl get secret supabase-secrets -n supabase -o jsonpath='{.data.ANON_KEY}' | base64 -d)
SERVICE_KEY=$(kubectl get secret supabase-secrets -n supabase -o jsonpath='{.data.SERVICE_ROLE_KEY}' | base64 -d)

if [ -z "$ANON_KEY" ] || [ -z "$SERVICE_KEY" ]; then
    echo "❌ Failed to read keys from supabase-secrets!"
    exit 1
fi

echo "✅ Keys retrieved"

# Get current Kong config
echo ""
echo "Updating Kong configuration..."
KONG_CONFIG=$(cat <<YAML
_format_version: "2.1"
_transform: true

services:
  - name: auth-v1-open
    url: http://gotrue:9999/verify
    routes:
      - name: auth-v1-open
        strip_path: true
        paths:
          - /auth/v1/verify
    plugins:
      - name: cors

  - name: auth-v1-open-callback
    url: http://gotrue:9999/callback
    routes:
      - name: auth-v1-open-callback
        strip_path: true
        paths:
          - /auth/v1/callback
    plugins:
      - name: cors

  - name: auth-v1-open-authorize
    url: http://gotrue:9999/authorize
    routes:
      - name: auth-v1-open-authorize
        strip_path: true
        paths:
          - /auth/v1/authorize
    plugins:
      - name: cors

  - name: auth-v1
    _comment: "GoTrue: /auth/v1/* -> http://gotrue:9999/*"
    url: http://gotrue:9999
    routes:
      - name: auth-v1-all
        strip_path: true
        paths:
          - /auth/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: false

  - name: rest-v1
    _comment: "PostgREST: /rest/v1/* -> http://rest:3000/*"
    url: http://rest:3000/
    routes:
      - name: rest-v1-all
        strip_path: true
        paths:
          - /rest/v1/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: true

  - name: graphql-v1
    _comment: "PostgREST: /graphql/v1/* -> http://rest:3000/rpc/graphql"
    url: http://rest:3000/rpc/graphql
    routes:
      - name: graphql-v1-all
        strip_path: true
        paths:
          - /graphql/v1
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: true

  - name: storage-v1
    _comment: "Storage: /storage/v1/* -> http://storage:5000/*"
    url: http://storage:5000/
    routes:
      - name: storage-v1-all
        strip_path: true
        paths:
          - /storage/v1/
    plugins:
      - name: cors

  - name: pg-meta
    _comment: "PostgresMeta: /pg/* -> http://postgres-meta:8080/*"
    url: http://postgres-meta:8080/
    routes:
      - name: pg-meta-all
        strip_path: true
        paths:
          - /pg/
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: false

consumers:
  - username: anon
    keyauth_credentials:
      - key: $ANON_KEY
  - username: service_role
    keyauth_credentials:
      - key: $SERVICE_KEY

plugins:
  - name: cors
    config:
      origins:
        - "*"
      credentials: true
      exposed_headers:
        - Content-Range
      headers:
        - authorization
        - content-type
        - x-client-info
        - apikey
        - x-upsert
YAML
)

# Update Kong config via a client-side dry run piped to apply, so the
# multiline YAML value is escaped correctly (a raw JSON patch would mangle it)
kubectl create configmap kong-config -n supabase \
  --from-literal=kong.yml="$KONG_CONFIG" \
  --dry-run=client -o yaml | kubectl apply -f -

echo "✅ Kong config updated"

# Restart Kong to apply changes
echo ""
echo "Restarting Kong..."
kubectl rollout restart deployment/kong -n supabase
kubectl rollout status deployment/kong -n supabase --timeout=60s

echo ""
echo "✅ Done! JWT keys are now in sync."
echo ""
echo "Verify by running: /root/tower-fleet/scripts/check-supabase-keys.sh"

Save as /root/tower-fleet/scripts/fix-supabase-keys.sh and make it executable (chmod +x).


Prevention

When Deploying New Apps

Always verify JWT keys are in sync:

# 1. Check current keys match
/root/tower-fleet/scripts/check-supabase-keys.sh

# 2. Get the correct ANON_KEY
kubectl get secret supabase-secrets -n supabase -o jsonpath='{.data.ANON_KEY}' | base64 -d

# 3. Use this EXACT key in your app's .env.production
NEXT_PUBLIC_SUPABASE_ANON_KEY=<key-from-step-2>

# 4. Build Docker image with correct keys
# 5. Deploy to Kubernetes
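
After deploying, a quick spot-check that the running app carries the same key as the cluster. The deployment and namespace names here are hypothetical, and printenv only sees runtime env vars; build-time-only NEXT_PUBLIC_* values must be checked in the bundle instead (see Troubleshooting below):

# Hypothetical app deployment/namespace -- substitute your own
POD_KEY=$(kubectl exec -n myapp deploy/myapp -- printenv NEXT_PUBLIC_SUPABASE_ANON_KEY)
SECRET_KEY=$(kubectl get secret supabase-secrets -n supabase \
  -o jsonpath='{.data.ANON_KEY}' | base64 -d)
[ "$POD_KEY" = "$SECRET_KEY" ] && echo "✅ app key matches" || echo "❌ app key MISMATCH"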

When Regenerating JWT Tokens

If you ever need to regenerate JWT tokens:

# 1. Update supabase-secrets with new keys
kubectl patch secret supabase-secrets -n supabase --type='json' \
  -p='[{"op": "replace", "path": "/data/ANON_KEY", "value": "BASE64_ENCODED_NEW_KEY"}]'

# 2. IMMEDIATELY update Kong config
/root/tower-fleet/scripts/fix-supabase-keys.sh

# 3. Update all app sealed secrets with new keys
# 4. Rebuild and redeploy all apps with new keys

# 5. Restart GoTrue to pick up new JWT_SECRET if changed
kubectl rollout restart deployment/gotrue -n supabase
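
Step 1 above assumes you already have freshly signed tokens. If you need to mint them yourself, here is a minimal sketch using only openssl, assuming HS256 signing (Supabase's default) and that the signing secret is stored under JWT_SECRET in supabase-secrets:

# Read the signing secret (assumes a JWT_SECRET entry in supabase-secrets)
JWT_SECRET=$(kubectl get secret supabase-secrets -n supabase \
  -o jsonpath='{.data.JWT_SECRET}' | base64 -d)

b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

NOW=$(date +%s)
HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
# 10-year expiry; swap "anon" for "service_role" to mint the service key
PAYLOAD=$(printf '{"role":"anon","iss":"supabase","iat":%s,"exp":%s}' \
  "$NOW" "$((NOW + 315360000))" | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" \
  | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
echo "$HEADER.$PAYLOAD.$SIG"

Remember to base64-encode each new key before patching it into the Secret in step 1.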

Troubleshooting

Auth fails with 401, no GoTrue logs

Diagnosis:

# Check if request reaches GoTrue
kubectl logs -n supabase -l app=gotrue --tail=20

# Check Kong logs (you'll see 401s here)
kubectl logs -n supabase -l app=kong --tail=20

If Kong shows 401 but GoTrue shows nothing, keys are mismatched.

Fix:

/root/tower-fleet/scripts/check-supabase-keys.sh
/root/tower-fleet/scripts/fix-supabase-keys.sh

After fixing, app still shows 401

Cause: The app's Docker image has old keys baked in. NEXT_PUBLIC_* variables are embedded into the JS bundle at build time, so restarting the pod alone won't pick up new values.

Fix:

# 1. Update .env.production with the correct key
#    (note: '>' overwrites the whole file; append or edit instead if it holds other vars)
cd /path/to/app
echo "NEXT_PUBLIC_SUPABASE_ANON_KEY=$(kubectl get secret supabase-secrets -n supabase -o jsonpath='{.data.ANON_KEY}' | base64 -d)" > .env.production

# 2. Rebuild image
docker build --no-cache -t app:new-version .

# 3. Push and restart deployment
docker push registry/app:new-version
kubectl rollout restart deployment/app -n namespace
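
Before pushing, you can confirm the rebuilt image actually contains the new key. A sketch that assumes the image has grep available and a standard Next.js build under /app/.next (adjust the path for your app):

NEW_KEY=$(kubectl get secret supabase-secrets -n supabase \
  -o jsonpath='{.data.ANON_KEY}' | base64 -d)
# Next.js inlines NEXT_PUBLIC_* values into the static chunks at build time
docker run --rm app:new-version grep -rFl "$NEW_KEY" /app/.next/static >/dev/null \
  && echo "✅ new key baked in" || echo "❌ image still carries the old key"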



Last Updated: 2025-11-11
Severity: CRITICAL - Auth completely broken if keys mismatch
Affected: All apps using Supabase authentication