
Supabase Storage Integration Guide

This guide covers implementing file uploads and storage using Supabase Storage across all tower-fleet applications.


Table of Contents

  1. Overview
  2. Bucket Creation
  3. RLS Policies
  4. File Upload Implementation
  5. Client-Side Upload
  6. File Retrieval
  7. Database Integration
  8. Real-World Examples
  9. Common Patterns
  10. Troubleshooting

Overview

What is Supabase Storage?

Supabase Storage is an S3-compatible object storage service integrated with Supabase. It provides:

  • Public and private buckets for different access patterns
  • RLS (Row-Level Security) policies for fine-grained access control
  • CDN-backed public URLs for fast delivery
  • Signed URLs for temporary access to private files
  • Built-in image transformations (resize, format conversion)

When to Use Storage

  • User-generated content: Profile pictures, document uploads
  • Media files: Photos, videos, audio
  • Application assets: Custom icons, logos, backgrounds
  • Downloadable content: Reports, exports, receipts

Architecture

Client Upload
  → Next.js API Route (validation, auth)
  → Supabase Storage (S3-compatible)
  → PostgreSQL (file metadata in app tables)

Flow:

  1. Client uploads file via API route
  2. API validates file (type, size, auth)
  3. File stored in Supabase Storage bucket
  4. Path stored in application database table
  5. Client retrieves via public URL or signed URL

Production External Access

Critical: Browser clients accessing production apps (e.g., portal.bogocat.com) cannot reach internal Supabase URLs like http://10.89.97.214:8000. You must configure a public storage endpoint.

Infrastructure already in place:

  • K8s Ingress: storage.bogocat.com → Kong (see manifests/supabase/storage-ingress.yaml)
  • VPS Caddy: storage.bogocat.com → K8s Ingress (see manifests/vps/Caddyfile.production)

App Configuration:

Add to your .env.production:

NEXT_PUBLIC_SUPABASE_STORAGE_URL=https://storage.bogocat.com

Code Pattern:

// In your component that displays uploaded files
const storageUrl = process.env.NEXT_PUBLIC_SUPABASE_STORAGE_URL
  || process.env.NEXT_PUBLIC_SUPABASE_URL

const publicUrl = `${storageUrl}/storage/v1/object/public/${bucket}/${path}`

Why two env vars?

  • NEXT_PUBLIC_SUPABASE_URL: Used for API calls (internal network OK for server-side)
  • NEXT_PUBLIC_SUPABASE_STORAGE_URL: Used for browser-loaded assets (must be externally accessible)
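
A small helper keeps this fallback in one place so every component builds public URLs the same way (a minimal sketch; the env vars are the ones configured above, while the module and function names are illustrative):

// lib/storage-url.ts (illustrative module name)
// Resolve the externally reachable storage origin, falling back to the API URL.
export function publicStorageUrl(bucket: string, path: string): string {
  const origin =
    process.env.NEXT_PUBLIC_SUPABASE_STORAGE_URL ||
    process.env.NEXT_PUBLIC_SUPABASE_URL

  return `${origin}/storage/v1/object/public/${bucket}/${path}`
}

Usage: publicStorageUrl('home-portal-service-icons', '12345_radarr.svg') yields a URL the browser can load from outside the cluster.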


Bucket Creation

Creating Buckets via Migrations

Best practice: Create buckets via SQL migrations (version controlled, reproducible)

Basic Pattern:

-- Create bucket
INSERT INTO storage.buckets (id, name, public, file_size_limit, allowed_mime_types)
VALUES (
  'bucket-name',           -- Bucket ID (use app-namespaced naming)
  'bucket-name',           -- Display name
  true,                    -- Public (true) or private (false)
  10485760,                -- Size limit in bytes (10MB example)
  ARRAY['image/png', 'image/jpeg', 'image/svg+xml']::text[]  -- Allowed MIME types
)
ON CONFLICT (id) DO NOTHING;
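
For local scripting or dev setup, buckets can also be created programmatically with supabase-js (a hedged sketch using the service-role client, which bucket administration requires; migrations remain the recommended path for production):

import { createClient } from '@supabase/supabase-js'

// Service-role client: bucket administration bypasses RLS.
// SUPABASE_SERVICE_ROLE_KEY is an assumed env var name.
const admin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

const { error } = await admin.storage.createBucket('bucket-name', {
  public: true,
  fileSizeLimit: 10 * 1024 * 1024, // 10MB
  allowedMimeTypes: ['image/png', 'image/jpeg', 'image/svg+xml'],
})
if (error) console.error('Bucket creation failed:', error.message)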

Public vs Private Buckets

Public Bucket:

  • Files accessible via public URL without authentication
  • Use for: Public assets, service icons, shared content
  • Access pattern: https://{project}.supabase.co/storage/v1/object/public/{bucket}/{path}

Private Bucket:

  • Requires authentication or signed URL
  • Use for: User uploads, private documents, sensitive files
  • Access pattern: Via signed URLs or authenticated requests

Decision Guide:

| Scenario              | Bucket Type | Reason                                |
| --------------------- | ----------- | ------------------------------------- |
| User profile pictures | Public      | Non-sensitive, need fast CDN delivery |
| Service/app icons     | Public      | Shared assets, no privacy concern     |
| User-uploaded videos  | Private     | User-specific, control access         |
| Receipt images        | Private     | Sensitive financial data              |
| Application logos     | Public      | Branding assets                       |
| Medical records       | Private     | HIPAA/privacy requirements            |

File Size Limits and MIME Types

File Size Limits:

file_size_limit => 10485760  -- 10MB (10 * 1024 * 1024 bytes)
file_size_limit => 104857600 -- 100MB
file_size_limit => 2147483648 -- 2GB (max practical limit)

Common MIME Types:

-- Images
ARRAY['image/png', 'image/jpeg', 'image/svg+xml', 'image/webp', 'image/gif']::text[]

-- Videos
ARRAY['video/mp4', 'video/webm', 'video/ogg', 'video/quicktime', 'video/x-matroska']::text[]

-- Documents
ARRAY['application/pdf', 'text/plain', 'text/csv']::text[]

-- Subtitles
ARRAY['text/plain', 'text/vtt', 'application/x-subrip']::text[]

Bucket Naming Convention

Pattern: {app-name}-{purpose}

Examples:

  • home-portal-service-icons
  • money-tracker-receipts
  • subtitleai-uploads
  • subtitleai-outputs

Why namespaced? The shared Kubernetes Supabase instance isolates each app's database by schema, but bucket names are global across the instance, so unprefixed names from different apps would collide.
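
In code, keeping the namespaced ID in a single exported constant avoids mistyped bucket names scattered across routes (illustrative sketch):

// lib/storage.ts (illustrative) - one source of truth for this app's bucket ID
export const SERVICE_ICONS_BUCKET = 'home-portal-service-icons'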


RLS Policies

Understanding storage.objects

Storage files are tracked in the storage.objects table with RLS policies controlling access.

Key fields:

  • bucket_id: Which bucket the file belongs to
  • name: File path (can include folders: user123/file.png)
  • owner: User who uploaded (nullable)

Helper function:

  • storage.foldername(name): Extracts the folder path as an array
  • Example: storage.foldername('user-id/subfolder/file.png') → {user-id, subfolder}

Public Bucket Policies

Pattern: Anyone can view, authenticated can upload

-- Public read access
CREATE POLICY "public_read_{bucket_name}"
  ON storage.objects
  FOR SELECT
  TO anon, authenticated
  USING (bucket_id = '{bucket-name}');

-- Authenticated users can upload
CREATE POLICY "authenticated_upload_{bucket_name}"
  ON storage.objects
  FOR INSERT
  TO authenticated
  WITH CHECK (bucket_id = '{bucket-name}');

-- Users can delete their uploads
CREATE POLICY "users_delete_{bucket_name}"
  ON storage.objects
  FOR DELETE
  TO authenticated
  USING (bucket_id = '{bucket-name}');

-- Users can update their uploads
CREATE POLICY "users_update_{bucket_name}"
  ON storage.objects
  FOR UPDATE
  TO authenticated
  USING (bucket_id = '{bucket-name}');

Use case: Service icons, public photos, shared assets

Private Bucket with User Folders

Pattern: Users can only access their own folder

-- Users can upload to their own folder: {user_id}/filename
CREATE POLICY "users_upload_own_folder_{bucket_name}"
  ON storage.objects
  FOR INSERT
  TO authenticated
  WITH CHECK (
    bucket_id = '{bucket-name}' AND
    (storage.foldername(name))[1] = auth.uid()::text
  );

-- Users can view their own files
CREATE POLICY "users_view_own_files_{bucket_name}"
  ON storage.objects
  FOR SELECT
  TO authenticated
  USING (
    bucket_id = '{bucket-name}' AND
    (storage.foldername(name))[1] = auth.uid()::text
  );

-- Users can delete their own files
CREATE POLICY "users_delete_own_files_{bucket_name}"
  ON storage.objects
  FOR DELETE
  TO authenticated
  USING (
    bucket_id = '{bucket-name}' AND
    (storage.foldername(name))[1] = auth.uid()::text
  );

-- Users can update their own files
CREATE POLICY "users_update_own_files_{bucket_name}"
  ON storage.objects
  FOR UPDATE
  TO authenticated
  USING (
    bucket_id = '{bucket-name}' AND
    (storage.foldername(name))[1] = auth.uid()::text
  );

Use case: User video uploads, private documents, receipts

Path structure: {user-uuid}/filename.ext (first folder = user ID)

Service Role Policies

Pattern: Background workers/services can write, users can read

-- Service role (backend workers) can insert
CREATE POLICY "service_insert_{bucket_name}"
  ON storage.objects
  FOR INSERT
  TO service_role
  WITH CHECK (bucket_id = '{bucket-name}');

-- Users can view outputs
CREATE POLICY "users_view_outputs_{bucket_name}"
  ON storage.objects
  FOR SELECT
  TO authenticated
  USING (
    bucket_id = '{bucket-name}' AND
    (storage.foldername(name))[1] = auth.uid()::text
  );

Use case: Processed outputs (transcoded videos, generated subtitles, reports)
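
A background worker writing processed outputs might look like the following sketch. It assumes a standard SUPABASE_SERVICE_ROLE_KEY env var and a VTT output file; because the service role bypasses RLS, the worker must build the user-scoped path itself so the SELECT policy above lets the owner read it back:

import { createClient } from '@supabase/supabase-js'
import { readFile } from 'node:fs/promises'

const admin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

// Store a generated file under the owning user's folder.
async function storeOutput(userId: string, localPath: string, fileName: string) {
  const body = await readFile(localPath)
  const { data, error } = await admin.storage
    .from('bucket-name')
    .upload(`${userId}/${fileName}`, body, {
      contentType: 'text/vtt', // example output type
      upsert: true,            // workers may regenerate outputs
    })
  if (error) throw error
  return data.path
}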


File Upload Implementation

API Route Pattern

File: src/app/api/upload/route.ts (or namespaced: /api/services/icons/route.ts)

import { NextResponse } from 'next/server'
import { createClient } from '@/lib/supabase/server'

export async function POST(request: Request) {
  try {
    const supabase = await createClient()

    // Step 1: Verify authentication
    const { data: { user }, error: authError } = await supabase.auth.getUser()
    if (authError || !user) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
    }

    // Step 2: Parse multipart form data
    const formData = await request.formData()
    const file = formData.get('file') as File | null

    if (!file) {
      return NextResponse.json({ error: 'No file provided' }, { status: 400 })
    }

    // Step 3: Validate file type
    const validTypes = ['image/png', 'image/jpeg', 'image/svg+xml', 'image/webp']
    if (!validTypes.includes(file.type)) {
      return NextResponse.json(
        { error: `Invalid file type. Allowed: ${validTypes.join(', ')}` },
        { status: 400 }
      )
    }

    // Step 4: Validate file size
    const maxSize = 10 * 1024 * 1024 // 10MB
    if (file.size > maxSize) {
      return NextResponse.json(
        { error: `File too large. Maximum size: ${maxSize / (1024 * 1024)}MB` },
        { status: 400 }
      )
    }

    // Step 5: Generate unique filename
    const timestamp = Date.now()
    const sanitizedName = file.name.replace(/[^a-zA-Z0-9._-]/g, '_')
    const storagePath = `${user.id}/${timestamp}_${sanitizedName}`

    // Step 6: Convert File to ArrayBuffer
    const arrayBuffer = await file.arrayBuffer()

    // Step 7: Upload to Supabase Storage
    const { data, error: uploadError } = await supabase.storage
      .from('bucket-name')
      .upload(storagePath, arrayBuffer, {
        contentType: file.type,
        cacheControl: '3600',  // Cache for 1 hour
        upsert: false,         // Don't overwrite existing files
      })

    if (uploadError) {
      console.error('Upload error:', uploadError)
      return NextResponse.json(
        { error: `Upload failed: ${uploadError.message}` },
        { status: 500 }
      )
    }

    // Step 8: Return storage path
    return NextResponse.json({
      success: true,
      path: data.path,
      fullPath: `bucket-name/${data.path}`,
      // Use the external storage origin for browser-loaded assets
      // (see Production External Access above)
      publicUrl: `${
        process.env.NEXT_PUBLIC_SUPABASE_STORAGE_URL ||
        process.env.NEXT_PUBLIC_SUPABASE_URL
      }/storage/v1/object/public/bucket-name/${data.path}`,
    })
  } catch (error) {
    console.error('Unexpected error:', error)
    return NextResponse.json(
      { error: 'Internal server error' },
      { status: 500 }
    )
  }
}

Key Implementation Details

1. Authentication first: Always verify user authentication before processing uploads. Return 401 if unauthorized.

2. File validation:

  • Check MIME type against allowed list
  • Check file size against limit
  • Sanitize filename to prevent path traversal

3. Unique filenames: Use the {timestamp}_{sanitized-name} pattern to prevent collisions.

4. User folder structure: For private buckets, prefix the path with the user ID: ${user.id}/filename

5. ArrayBuffer conversion: In server-side route handlers, convert the File to an ArrayBuffer before uploading (supabase-js also accepts a Blob or Buffer, but ArrayBuffer behaves consistently across runtimes):

const arrayBuffer = await file.arrayBuffer()

6. Upload options:

  • contentType: Set explicitly for proper browser handling
  • cacheControl: Control CDN caching (3600 = 1 hour)
  • upsert: false prevents accidental overwrites


Client-Side Upload

Basic File Input

'use client'

import { useState } from 'react'

export function FileUpload() {
  const [uploading, setUploading] = useState(false)
  const [error, setError] = useState<string | null>(null)
  const [uploadedUrl, setUploadedUrl] = useState<string | null>(null)

  async function handleUpload(e: React.ChangeEvent<HTMLInputElement>) {
    const file = e.target.files?.[0]
    if (!file) return

    setUploading(true)
    setError(null)

    try {
      const formData = new FormData()
      formData.append('file', file)

      const response = await fetch('/api/upload', {
        method: 'POST',
        body: formData,
      })

      if (!response.ok) {
        const data = await response.json()
        throw new Error(data.error || 'Upload failed')
      }

      const data = await response.json()
      setUploadedUrl(data.publicUrl)
    } catch (err) {
      setError(err instanceof Error ? err.message : 'Upload failed')
    } finally {
      setUploading(false)
    }
  }

  return (
    <div className="space-y-4">
      <input
        type="file"
        accept="image/*"
        onChange={handleUpload}
        disabled={uploading}
        className="file-input"
      />

      {uploading && <p>Uploading...</p>}
      {error && <p className="text-red-500">{error}</p>}
      {uploadedUrl && (
        <div>
          <p className="text-green-500">Upload successful!</p>
          <img src={uploadedUrl} alt="Uploaded" className="w-32 h-32" />
        </div>
      )}
    </div>
  )
}

Drag-and-Drop Upload

'use client'

import { useState } from 'react'

export function DragDropUpload() {
  const [dragging, setDragging] = useState(false)

  function handleDragEnter(e: React.DragEvent) {
    e.preventDefault()
    setDragging(true)
  }

  function handleDragLeave(e: React.DragEvent) {
    e.preventDefault()
    setDragging(false)
  }

  function handleDrop(e: React.DragEvent) {
    e.preventDefault()
    setDragging(false)

    const files = Array.from(e.dataTransfer.files)
    if (files.length > 0) {
      uploadFile(files[0])
    }
  }

  async function uploadFile(file: File) {
    // ... same upload logic as above
  }

  return (
    // A <label> makes the hidden input clickable, so "click to select" works.
    <label
      onDragEnter={handleDragEnter}
      onDragOver={(e) => e.preventDefault()}
      onDragLeave={handleDragLeave}
      onDrop={handleDrop}
      className={`block cursor-pointer border-2 border-dashed rounded-lg p-8 text-center ${
        dragging ? 'border-blue-500 bg-blue-50' : 'border-gray-300'
      }`}
    >
      <p>Drag and drop files here, or click to select</p>
      <input
        type="file"
        className="hidden"
        onChange={(e) => {
          const file = e.target.files?.[0]
          if (file) uploadFile(file)
        }}
      />
    </label>
  )
}

File Retrieval

Public URLs

Pattern:

https://{project}.supabase.co/storage/v1/object/public/{bucket}/{path}

Example:

const storageUrl = process.env.NEXT_PUBLIC_SUPABASE_STORAGE_URL
  || process.env.NEXT_PUBLIC_SUPABASE_URL

const publicUrl = `${storageUrl}/storage/v1/object/public/home-portal-service-icons/12345_radarr.svg`

Usage in Components:

<img src={publicUrl} alt="Service icon" />

Signed URLs (Private Files)

For temporary access to private files:

const { data, error } = await supabase.storage
  .from('private-bucket')
  .createSignedUrl('user-id/file.pdf', 3600) // Expires in 1 hour

if (data) {
  const signedUrl = data.signedUrl
  // Use signedUrl for download or display
}

Use cases:

  • Temporary download links
  • Time-limited access to sensitive files
  • Sharing private files without making the bucket public
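
In practice, signed URLs are usually minted in an API route so the path lookup stays server-side (a minimal sketch; the route path is illustrative and the user_uploads table matches the schema in the Database Integration section below):

// src/app/api/files/[id]/route.ts (illustrative path)
import { NextResponse } from 'next/server'
import { createClient } from '@/lib/supabase/server'

export async function GET(
  _request: Request,
  { params }: { params: { id: string } }
) {
  const supabase = await createClient()

  const { data: { user } } = await supabase.auth.getUser()
  if (!user) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
  }

  // Look up the file path, scoped to the requesting user.
  const { data: upload } = await supabase
    .from('user_uploads')
    .select('file_path')
    .eq('id', params.id)
    .eq('user_id', user.id)
    .single()

  if (!upload) {
    return NextResponse.json({ error: 'Not found' }, { status: 404 })
  }

  const { data, error } = await supabase.storage
    .from('private-bucket')
    .createSignedUrl(upload.file_path, 3600)

  if (error || !data) {
    return NextResponse.json({ error: 'Could not sign URL' }, { status: 500 })
  }
  return NextResponse.json({ url: data.signedUrl })
}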

Image Transformations

Supabase supports on-the-fly image transformations (note: on hosted Supabase this is a paid-plan feature, and self-hosted instances need the imgproxy transformation service enabled):

const { data } = supabase.storage
  .from('photos')
  .getPublicUrl('photo.jpg', {
    transform: {
      width: 300,
      height: 300,
      resize: 'cover',
      quality: 80,
    },
  })

// data.publicUrl points at the transformed image
// (getPublicUrl is synchronous: no await needed)

Transformation options:

  • width, height: Dimensions in pixels
  • resize: cover, contain, fill
  • quality: 1-100 (compression)
  • format: webp is served automatically to clients that support it; pass format: 'origin' to keep the original format


Database Integration

Storing File Paths

Pattern: Store the storage path in your application table

CREATE TABLE user_uploads (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE,
  file_path TEXT NOT NULL,   -- e.g., 'user-id/12345_document.pdf'
  file_name TEXT NOT NULL,   -- Original filename
  file_size BIGINT NOT NULL, -- Bytes (BIGINT: a 2GB file overflows INT)
  mime_type TEXT NOT NULL,
  uploaded_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

Storing after upload:

// After successful storage upload
const { data: uploadData } = await supabase.storage
  .from('bucket-name')
  .upload(storagePath, arrayBuffer)

// Store metadata in database
const { error: dbError } = await supabase
  .from('user_uploads')
  .insert({
    user_id: user.id,
    file_path: uploadData.path,
    file_name: file.name,
    file_size: file.size,
    mime_type: file.type,
  })

Cleanup on Deletion

Important: Delete storage files when deleting database records to prevent orphaned files.

// API route: DELETE /api/uploads/[id]

// 1. Get file path from database
const { data: upload } = await supabase
  .from('user_uploads')
  .select('file_path')
  .eq('id', uploadId)
  .eq('user_id', user.id)
  .single()

// 2. Delete from storage
if (upload) {
  await supabase.storage
    .from('bucket-name')
    .remove([upload.file_path])
}

// 3. Delete database record
await supabase
  .from('user_uploads')
  .delete()
  .eq('id', uploadId)
  .eq('user_id', user.id)

Cascade pattern for related records:

-- When service is deleted, also delete associated icon
CREATE OR REPLACE FUNCTION delete_service_icon()
RETURNS TRIGGER AS $$
BEGIN
  -- Delete icon from storage if it's a custom upload
  IF OLD.icon_url LIKE 'bucket-name/%' THEN
    -- Note: This requires a function with storage.delete permissions.
    -- Alternatively, handle in application code.
    NULL;  -- placeholder: PL/pgSQL requires at least one statement here
  END IF;
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_service_delete
  BEFORE DELETE ON services
  FOR EACH ROW
  EXECUTE FUNCTION delete_service_icon();

Best practice: Handle cleanup in application code (API routes) rather than database triggers for better error handling.
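
In application code, the same cascade for the icon example might look like this sketch (deleteService and the services table shape are assumptions for illustration):

import type { SupabaseClient } from '@supabase/supabase-js'

// Delete a service row and its custom icon, storage first.
async function deleteService(supabase: SupabaseClient, serviceId: string) {
  const { data: service } = await supabase
    .from('services')
    .select('icon_url')
    .eq('id', serviceId)
    .single()

  // Remove the stored icon only if it's a custom upload.
  if (service?.icon_url?.startsWith('bucket-name/')) {
    const path = service.icon_url.replace('bucket-name/', '')
    await supabase.storage.from('bucket-name').remove([path])
  }

  await supabase.from('services').delete().eq('id', serviceId)
}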


Real-World Examples

Example 1: SubtitleAI Video Uploads (Private Bucket)

Bucket: subtitleai-uploads
Type: Private with user folders
Purpose: User-uploaded videos for subtitle generation

Migration:

INSERT INTO storage.buckets (id, name, public, file_size_limit, allowed_mime_types)
VALUES (
  'subtitleai-uploads',
  'subtitleai-uploads',
  false,  -- Private
  2147483648,  -- 2GB
  ARRAY['video/mp4', 'video/webm', 'video/ogg', 'video/quicktime', 'video/x-matroska', 'video/x-msvideo']::text[]
);

CREATE POLICY "users_upload_videos"
  ON storage.objects FOR INSERT TO authenticated
  WITH CHECK (bucket_id = 'subtitleai-uploads' AND (storage.foldername(name))[1] = auth.uid()::text);

Upload Route: src/app/api/upload/route.ts

  • Validates video file types
  • Enforces 2GB limit
  • Uses ${user.id}/${timestamp}_${filename} path

Reference: /root/projects/subtitleai/app/api/upload/route.ts

Example 2: home-portal Service Icons (Public Bucket)

Bucket: home-portal-service-icons
Type: Public
Purpose: Custom service icons for dashboard

Migration:

INSERT INTO storage.buckets (id, name, public, file_size_limit, allowed_mime_types)
VALUES (
  'home-portal-service-icons',
  'home-portal-service-icons',
  true,  -- Public
  5242880,  -- 5MB
  ARRAY['image/png', 'image/jpeg', 'image/svg+xml', 'image/webp']::text[]
);

CREATE POLICY "public_view_icons"
  ON storage.objects FOR SELECT TO anon, authenticated
  USING (bucket_id = 'home-portal-service-icons');

Icon Component:

// Checks if the icon field is a storage path
if (icon.startsWith('home-portal-service-icons/')) {
  // Browser-loaded asset: use the external storage origin
  // (see Production External Access)
  const storageUrl = process.env.NEXT_PUBLIC_SUPABASE_STORAGE_URL
    || process.env.NEXT_PUBLIC_SUPABASE_URL
  const url = `${storageUrl}/storage/v1/object/public/${icon}`
  return <img src={url} alt="Service icon" />
}

Reference: See NEXT_STEPS.md in home-portal

Example 3: money-tracker Receipt Uploads (Private Bucket)

Bucket: money-tracker-receipts
Type: Private with user folders
Purpose: Receipt images linked to transactions

Database Integration:

CREATE TABLE transactions (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES auth.users(id) ON DELETE CASCADE,
  amount DECIMAL(10,2) NOT NULL,
  description TEXT,
  receipt_path TEXT,  -- Optional: 'money-tracker-receipts/user-id/receipt.jpg'
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

Workflow:

  1. User uploads receipt via /api/receipts endpoint
  2. File stored in money-tracker-receipts/{user.id}/{timestamp}_receipt.jpg
  3. Path saved in transactions.receipt_path (steps 2-3 are sketched below)
  4. Displayed via signed URL (temporary access)
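
A condensed sketch of steps 2-3 (the helper name and transactionId parameter are illustrative; the client, user, and file bytes come from the upload-route pattern shown earlier):

import type { SupabaseClient } from '@supabase/supabase-js'

// Upload a receipt into the user's folder, then link it to the transaction.
async function attachReceipt(
  supabase: SupabaseClient,
  userId: string,
  transactionId: string,
  bytes: ArrayBuffer
) {
  const { data, error } = await supabase.storage
    .from('money-tracker-receipts')
    .upload(`${userId}/${Date.now()}_receipt.jpg`, bytes, {
      contentType: 'image/jpeg',
    })
  if (error) throw error

  await supabase
    .from('transactions')
    .update({ receipt_path: data.path })
    .eq('id', transactionId)
    .eq('user_id', userId)
}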


Common Patterns

Pattern 1: App-Namespaced Buckets

Naming: {app-name}-{purpose}

Why? Shared Kubernetes Supabase instance serves multiple apps. Namespacing prevents collisions.

Examples:

  • home-portal-service-icons
  • money-tracker-receipts
  • subtitleai-uploads
  • trip-planner-photos

Pattern 2: User Folder Isolation

Path: {user-uuid}/filename.ext

RLS Policy:

(storage.foldername(name))[1] = auth.uid()::text

Benefits:

  • Automatic user isolation
  • Simple RLS policies
  • Easy user data deletion (delete the folder, as sketched below)
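
The "delete folder" benefit translates to listing the user's prefix and removing everything under it, since Storage has no single delete-folder call (a sketch that assumes a flat user folder with no nested subfolders):

import type { SupabaseClient } from '@supabase/supabase-js'

// Remove every object under {userId}/ in a bucket.
async function deleteUserFolder(
  supabase: SupabaseClient,
  bucket: string,
  userId: string
) {
  const { data: files, error } = await supabase.storage
    .from(bucket)
    .list(userId, { limit: 1000 })
  if (error) throw error
  if (!files || files.length === 0) return

  const paths = files.map((f) => `${userId}/${f.name}`)
  const { error: removeError } = await supabase.storage
    .from(bucket)
    .remove(paths)
  if (removeError) throw removeError
}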

Pattern 3: Timestamp Filenames

Pattern: {timestamp}_{sanitized-original-name}.{ext}

Example: 1732300800000_vacation-photo.jpg

Benefits:

  • Prevents filename collisions
  • Chronological ordering
  • Preserves the original name for UX
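
Combined with Pattern 2, path generation reduces to a small pure function (sketch; the helper name is illustrative and mirrors the sanitization used in the upload route above):

// Build a collision-safe, user-scoped storage path from an uploaded File.
function buildStoragePath(userId: string, file: File): string {
  const sanitized = file.name.replace(/[^a-zA-Z0-9._-]/g, '_')
  return `${userId}/${Date.now()}_${sanitized}`
}

// e.g. buildStoragePath(user.id, photo)
// -> '8f14e45f-.../1732300800000_vacation-photo.jpg'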

Pattern 4: Cleanup on Delete

Always delete storage files when deleting related database records:

// Get record with file path
const record = await getRecordWithFilePath(id)

// Delete from storage
if (record.file_path) {
  await supabase.storage.from('bucket').remove([record.file_path])
}

// Delete database record
await supabase.from('table').delete().eq('id', id)

Troubleshooting

Upload Fails with "Unauthorized"

Cause: User not authenticated or session expired

Solution:

// Verify auth in API route
const { data: { user }, error } = await supabase.auth.getUser()
if (error || !user) {
  return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
}

Upload Fails with "Policy violation"

Cause: RLS policy doesn't allow the operation

Debug:

  1. Check the user is authenticated
  2. Verify the bucket_id in the policy matches the request
  3. For user folders, ensure the path starts with ${user.id}/
  4. Check the policy covers the operation (INSERT, SELECT, DELETE, UPDATE)

Test policies:

-- Test as a specific user (inside a transaction so the settings don't persist)
BEGIN;
SET LOCAL ROLE authenticated;
SET LOCAL request.jwt.claims = '{"sub": "user-uuid", "role": "authenticated"}';
SELECT * FROM storage.objects WHERE bucket_id = 'bucket-name';
ROLLBACK;

File Not Found (404)

Cause: Incorrect public URL or file doesn't exist

Check:

  1. Bucket is public (for public URLs)
  2. Path is correct (no extra slashes)
  3. File actually exists in storage (see the listing sketch below)
  4. URL format: {SUPABASE_URL}/storage/v1/object/public/{bucket}/{path}
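
A quick way to confirm the file exists under the exact path (sketch; assumes a supabase client is in scope):

// List the parent folder, searching for the file name.
const { data, error } = await supabase.storage
  .from('bucket-name')
  .list('user-id', { search: 'file.png' })

console.log(error ?? data) // an empty array means no match: re-check the path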

Image Doesn't Display

Cause: CORS, MIME type, or path issue

Solutions:

  • Verify contentType was set correctly on upload
  • Check bucket CORS settings (usually automatic)
  • Use the browser DevTools Network tab to see the actual response
  • Try a signed URL for private buckets

Large Files Fail to Upload

Causes:

  • Exceeds the bucket's file_size_limit
  • Exceeds a request body size limit (the Pages Router body parser, or a reverse proxy such as Kong/ingress/Caddy)
  • Network timeout

Solutions:

// Pages Router only: raise the per-route body limit in the API route file
// (e.g. pages/api/upload.ts). App Router route handlers, as used in this
// guide, stream the request body without a bodyParser limit; check the
// reverse proxy (Kong / ingress / Caddy) limits instead.
export const config = {
  api: {
    bodyParser: {
      sizeLimit: '100mb',
    },
  },
}

-- Increase bucket limit
UPDATE storage.buckets
SET file_size_limit = 104857600  -- 100MB
WHERE id = 'bucket-name';

Orphaned Files (Files Without DB Records)

Cause: Upload succeeded but database insert failed

Prevention:

// Upload first
const { data: uploadData, error: uploadError } = await supabase.storage
  .from('bucket').upload(path, file)

if (uploadError) throw uploadError

try {
  // Then create DB record
  await supabase.from('table').insert({ file_path: uploadData.path })
} catch (dbError) {
  // Rollback: Delete uploaded file
  await supabase.storage.from('bucket').remove([uploadData.path])
  throw dbError
}


Summary

Key Takeaways

  ✅ Create buckets via migrations for version control
  ✅ Use app-namespaced bucket names for multi-app setups
  ✅ Implement RLS policies matching bucket privacy (public vs private)
  ✅ Validate files (type, size) before upload
  ✅ Use user folders ({user-id}/filename) for privacy
  ✅ Store file paths in database for tracking
  ✅ Clean up storage files when deleting records
  ✅ Use public URLs for public buckets, signed URLs for private

Quick Reference

| Task               | Code                                                              |
| ------------------ | ----------------------------------------------------------------- |
| Create bucket      | INSERT INTO storage.buckets (id, name, public, ...)               |
| Public read policy | FOR SELECT TO anon, authenticated USING (bucket_id = '...')       |
| User folder policy | (storage.foldername(name))[1] = auth.uid()::text                  |
| Upload file        | supabase.storage.from('bucket').upload(path, arrayBuffer, {...})  |
| Public URL         | ${SUPABASE_URL}/storage/v1/object/public/{bucket}/{path}          |
| Signed URL         | supabase.storage.from('bucket').createSignedUrl(path, expiresIn)  |
| Delete file        | supabase.storage.from('bucket').remove([path])                    |

Next Steps

  1. Review Supabase Integration Guide for auth setup
  2. See real examples: SubtitleAI (/root/projects/subtitleai/)
  3. Check app-specific guides: home-portal, money-tracker

Last Updated: 2025-12-16