RMS (Recipe Management System)

AI-powered recipe management with intelligent meal planning and ingredient optimization.


Production

  • Dashboard: http://10.89.97.TBD (pending deployment)
  • Namespace: rms (planned)
  • LoadBalancer IP: TBD

Development

  • Dev Server: http://10.89.97.170:3000 (LXC 170)


Overview

RMS is a comprehensive recipe management system designed to streamline cooking, meal planning, and ingredient management. The system features:

  • AI Recipe Generation: Create recipes from ideas using multiple AI providers (Anthropic, OpenAI, OpenRouter, DeepSeek, Ollama)
  • Intelligent Ingredient Management: Track pantry inventory with automatic suggestions
  • Recipe Workflow: Idea → Draft → Complete with lineage tracking
  • Multi-User Support: Complete isolation with Row Level Security (RLS)
  • Tag System: Organize by cuisine, meal type, and dietary restrictions

Migration Status: Currently migrating from FastAPI/Vue to Next.js 16 + Supabase.
  • V1 Schema: ✅ Completed (Nov 11, 2025)
  • V2 Features: 🚧 Planned (meal planning, shopping lists, ingredient transformations)


Tech Stack

  • Framework: Next.js 16 (App Router)
  • UI Library: React 19
  • Styling: Tailwind CSS v4
  • Database: Supabase (PostgreSQL)
  • Authentication: Supabase Auth (shared user pool)
  • AI Providers: Anthropic Claude, OpenAI GPT, OpenRouter, DeepSeek, Ollama
  • Deployment: Kubernetes (k3s) - planned

Database

Schema Design

Tables are organized in the rms PostgreSQL schema with complete RLS policies for multi-user isolation.

Core Tables (V1)

rms.recipes              -- Main recipe data with status tracking
rms.ingredients          -- Master ingredient list (shared across users)
rms.recipe_ingredients   -- Junction: recipe ↔ ingredients
rms.recipe_steps         -- Ordered cooking instructions
rms.tags                 -- Cuisine/meal type/dietary tags
rms.recipe_tags          -- Junction: recipe ↔ tags
rms.user_ingredients     -- User pantry/inventory
rms.user_settings        -- AI provider preferences

Design Decisions

1. Recipe Status Enum (Not Boolean)

recipe_status TEXT NOT NULL DEFAULT 'idea'
  CHECK (recipe_status IN ('idea', 'draft', 'complete', 'archived'))
Rationale: Provides clear state progression (idea → draft → complete → archived) instead of ambiguous boolean flags. Matches user workflow where recipes evolve through multiple stages.
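A client could guard these transitions before attempting a save. The sketch below is an assumption for illustration: the database only constrains the value set, not which moves are legal, and the transition map (including treating archived as terminal) is not part of the schema.

```typescript
// Hypothetical client-side guard for the status workflow; the database only
// constrains the value set, so the transition map below is an assumption.
type RecipeStatus = 'idea' | 'draft' | 'complete' | 'archived';

const ALLOWED_TRANSITIONS: Record<RecipeStatus, RecipeStatus[]> = {
  idea: ['draft', 'archived'],
  draft: ['complete', 'archived'],
  complete: ['archived'],
  archived: [], // assumed terminal
};

function canTransition(from: RecipeStatus, to: RecipeStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```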

2. Lineage Tracking

parent_recipe_id UUID REFERENCES rms.recipes(id) ON DELETE SET NULL
Rationale: Track which "idea" was expanded into a full recipe. Useful for understanding recipe evolution and AI generation patterns.

3. Favorites Flag

is_favorited BOOLEAN NOT NULL DEFAULT FALSE
Rationale: Simple, indexed field for quick filtering. Better UX than tag-based favorites.

4. Conditional Validation

CONSTRAINT complete_recipe_validation CHECK (
  recipe_status != 'complete' OR (
    prep_time_minutes IS NOT NULL AND
    cook_time_minutes IS NOT NULL AND
    description != ''
  )
)
Rationale: Enforce data quality only when needed. Ideas can be sparse, but complete recipes must have required fields.
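The same rule can be mirrored in application code to give users feedback before a save fails the constraint. A minimal sketch, with illustrative camelCase mappings of the column names:

```typescript
// Client-side mirror of the complete_recipe_validation CHECK constraint.
// Field names are illustrative camelCase mappings of the columns.
interface RecipeFields {
  recipeStatus: 'idea' | 'draft' | 'complete' | 'archived';
  prepTimeMinutes: number | null;
  cookTimeMinutes: number | null;
  description: string;
}

function satisfiesCompleteValidation(r: RecipeFields): boolean {
  // Non-complete recipes are always valid; complete ones need times and a description.
  return (
    r.recipeStatus !== 'complete' ||
    (r.prepTimeMinutes !== null && r.cookTimeMinutes !== null && r.description !== '')
  );
}
```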

5. V1-Only Approach

We deliberately built V1 core features first (recipes, ingredients, tags) and deferred V2 features (meal_plans, shopping_lists, ingredient_transformations) for later implementation.

Rationale:
  • Incremental Development: "Baby steps aren't shortcuts, they are building blocks"
  • Risk Mitigation: Validate core functionality before adding complexity
  • Faster Iteration: Get a working system deployed sooner
  • Clear Milestones: V1 completion is a well-defined success criterion

V2 features will be added when V1 is stable and deployed to production.

Supabase Connection

Production (Kubernetes) - Current Development Target:

NEXT_PUBLIC_SUPABASE_URL: http://10.89.97.214:8000
NEXT_PUBLIC_SUPABASE_ANON_KEY: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

Note: We're using production Supabase for development during the initial build. A sandbox environment will be created in a future phase once production is mature.

See Multi-App Supabase Architecture for details on schema isolation.


Development

Access Container

# From Proxmox host
pct enter 170

# Or via SSH
ssh root@10.89.97.170

# Via host symlink
cd /root/projects/rms

Running Dev Server

# Attach to tmux session
tmux attach -t rms

# Or start manually
npm run dev

Database Migrations

CRITICAL: PostgreSQL Schema Prefixing

When working with the rms schema in k8s Supabase, you MUST use explicit schema prefixes on all database objects:

-- ✅ CORRECT - Explicit prefix
CREATE TABLE rms.recipes (...);
CREATE INDEX idx_recipes_user_id ON rms.recipes(user_id);
CREATE POLICY "Users can view own recipes" ON rms.recipes FOR SELECT ...;

-- ❌ INCORRECT - SET search_path doesn't work reliably in kubectl exec
SET search_path TO rms;
CREATE TABLE recipes (...);  -- Will create in public schema!

Why: The SET search_path TO rms; command doesn't persist properly when piping SQL through kubectl exec psql. Always use explicit rms. prefixes to ensure objects are created in the correct schema.

Migration Process:

# 1. Create migration file in LXC 170
cd /root/rms
vim supabase/migrations/YYYYMMDDHHMMSS_migration_name.sql

# 2. Ensure all objects have explicit rms. prefix
# 3. Apply to k8s Supabase
kubectl exec -i -n supabase postgres-0 -- psql -U postgres -d postgres < supabase/migrations/YYYYMMDDHHMMSS_migration_name.sql

# 4. Verify tables created
kubectl exec -n supabase postgres-0 -- psql -U postgres -d postgres -c "\\dt rms.*"

# 5. Verify indexes
kubectl exec -n supabase postgres-0 -- psql -U postgres -d postgres -c "SELECT indexname, tablename FROM pg_indexes WHERE schemaname = 'rms' ORDER BY tablename, indexname;"

# 6. Commit migration to Git
git add supabase/migrations/
git commit -m "Add migration: migration_name"
git push

Initial Schema Setup (Completed)

The V1 schema was successfully applied on November 11, 2025:

# Applied migration
kubectl exec -i -n supabase postgres-0 -- psql -U postgres -d postgres < /tmp/rms_v1_schema.sql

# Verification showed:
# ✅ 8 tables created in rms schema
# ✅ 31 indexes for query performance
# ✅ 24 RLS policies for multi-user isolation
# ✅ 4 helper functions (ingredient_count, step_count, is_complete, update_timestamp)
# ✅ 24 tags seeded (cuisine, meal_type, dietary)
# ✅ 20 common ingredients seeded

Migration file stored at: supabase/migrations/20251111000001_v1_enhanced_schema.sql


AI Provider Architecture

Overview

RMS uses a provider-agnostic AI abstraction layer that supports multiple AI services with automatic fallback, retry logic, and cost optimization.

Supported Providers:
  • Anthropic Claude (direct API, prompt caching, structured outputs)
  • OpenRouter (unified API for multiple models with price/speed routing)

Location: /lib/ai/providers/

Implementation Status: ✅ Complete (Nov 11, 2025)

Provider Abstraction Layer

Base Provider (base.ts)

All providers extend BaseAIProvider which provides:

  • Retry Logic: Exponential/linear backoff (max 3 attempts by default)
  • Error Classification: Automatic retryable vs non-retryable detection
    • Retryable: 429 (rate limit), 5xx (server errors), network timeouts
    • Non-retryable: 401 (auth), 400 (bad request), validation errors
  • Usage Tracking: Token counts, duration, cost estimation
  • Observability: Structured logging for debugging and monitoring
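The retry behavior can be sketched as two small pure functions. The 1s base delay matches the "1s, 2s, 4s" exponential example later in this doc; treat both the base delay and the function names as assumptions about base.ts:

```typescript
// Delay schedule for retries (assumed 1s base, matching the "1s, 2s, 4s" example).
function backoffDelayMs(
  attempt: number, // 0-based retry attempt
  strategy: 'exponential' | 'linear',
  baseMs = 1000,
): number {
  return strategy === 'exponential' ? baseMs * 2 ** attempt : baseMs * (attempt + 1);
}

// Error classification per the lists above: 429 and 5xx are retryable.
function isRetryableStatus(status: number): boolean {
  return status === 429 || (status >= 500 && status < 600);
}
```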

Factory Pattern (factory.ts)

Create providers dynamically with multiple strategies:

import { AIProviderFactory } from '@/lib/ai';

// Create from explicit config
const provider = AIProviderFactory.create({
  providerType: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-sonnet-4-5-20250929'
});

// Create from environment variables (auto-detects DEFAULT_AI_PROVIDER)
const envProvider = AIProviderFactory.createFromEnv();

// Create with fallback chain (tries providers in order until success)
const fallbackProvider = await AIProviderFactory.createWithFallback([
  { providerType: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY! },
  { providerType: 'openrouter', apiKey: process.env.OPENROUTER_API_KEY! }
]);

Benefits:
  • 99.9% uptime with automatic failover
  • Transparent to calling code
  • Easy provider switching for A/B testing

Anthropic Provider Features

Prompt Caching (90% Cost Reduction)

Anthropic's prompt caching stores repeated content for 5 minutes, dramatically reducing costs:

const result = await provider.generateCompletion(
  userPrompt,
  systemPrompt,  // This gets cached!
  {
    enableCaching: true,  // Enable caching
    maxTokens: 4096
  }
);

// First call: Full cost
// Subsequent calls within 5 min: 90% cheaper, 85% faster

What Gets Cached:
  • System prompts (cooking principles, format instructions)
  • Available ingredient lists
  • Tag definitions
  • JSON schema specifications

Performance:
  • Write: $3.75 per 1M tokens (cached for 5 min)
  • Read: $0.30 per 1M tokens (vs $3.00 uncached)
  • Latency: 85% reduction on cache hits
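To see how those prices combine, here is a rough input-token cost model using the per-million-token rates quoted above (output tokens and exact billing rules are omitted; this is arithmetic for intuition, not a billing implementation):

```typescript
// Rough input-token cost model using the quoted per-million-token prices (USD).
const UNCACHED_PER_M = 3.00;     // regular input tokens
const CACHE_WRITE_PER_M = 3.75;  // tokens written to the 5-minute cache
const CACHE_READ_PER_M = 0.30;   // tokens read back from cache (90% cheaper)

function inputCostUsd(
  uncachedTokens: number,
  cacheWriteTokens: number,
  cacheReadTokens: number,
): number {
  return (
    uncachedTokens * UNCACHED_PER_M +
    cacheWriteTokens * CACHE_WRITE_PER_M +
    cacheReadTokens * CACHE_READ_PER_M
  ) / 1_000_000;
}
```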

Structured Output (Type-Safe Responses)

Use tool calling for guaranteed JSON structure:

import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';

const RecipeSchema = z.object({
  title: z.string(),
  ingredients: z.array(z.object({
    name: z.string(),
    quantity: z.string(),
    unit: z.string()
  })),
  steps: z.array(z.object({
    stepNumber: z.number(),
    instruction: z.string()
  }))
});

// Convert Zod schema to JSON schema
const jsonSchema = zodToJsonSchema(RecipeSchema);

const result = await provider.generateCompletion(
  prompt,
  systemPrompt,
  { schema: jsonSchema }  // Forces structured output
);

// Result is guaranteed to match schema (or throws error)
const recipe = JSON.parse(result);

Benefits:
  • No manual JSON extraction needed
  • Built-in validation
  • Type-safe responses
  • Reduced token usage (no format instructions needed)
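Even with schema-forced output, validating the parsed JSON before writing it to the database is cheap insurance. A dependency-free type guard mirroring the RecipeSchema shape above (illustrative, not project code):

```typescript
// Dependency-free runtime guard for the recipe shape (illustrative, not project code).
interface GeneratedRecipe {
  title: string;
  ingredients: { name: string; quantity: string; unit: string }[];
  steps: { stepNumber: number; instruction: string }[];
}

function isGeneratedRecipe(value: unknown): value is GeneratedRecipe {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as GeneratedRecipe;
  return (
    typeof v.title === 'string' &&
    Array.isArray(v.ingredients) &&
    v.ingredients.every(
      (i) => typeof i?.name === 'string' && typeof i?.quantity === 'string' && typeof i?.unit === 'string',
    ) &&
    Array.isArray(v.steps) &&
    v.steps.every((s) => typeof s?.stepNumber === 'number' && typeof s?.instruction === 'string')
  );
}
```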

Usage Stats Tracking

Every generation logs comprehensive metrics:

// Console output after each generation:
{
  promptTokens: 1250,
  completionTokens: 850,
  totalTokens: 2100,
  cacheReadTokens: 1000,      // Tokens read from cache
  cacheCreationTokens: 0,     // Tokens written to cache (first call only)
  cost: 0.0124,               // Total cost in USD
  duration: 1840,             // Milliseconds
  cacheHitRate: '80.0%'       // Cache efficiency
}
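The cacheHitRate figure can be derived from the token counts: in the sample above, 1000 cached of 1250 prompt tokens gives 80.0%. A sketch, assuming promptTokens includes cached reads:

```typescript
// Derivation of the cacheHitRate field, assuming promptTokens includes cached reads.
function cacheHitRate(promptTokens: number, cacheReadTokens: number): string {
  if (promptTokens === 0) return '0.0%';
  return ((cacheReadTokens / promptTokens) * 100).toFixed(1) + '%';
}
```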

OpenRouter Provider Features

Model Routing Shortcuts

OpenRouter supports routing modifiers to optimize cost or speed:

import { OpenRouterHelpers } from '@/lib/ai';

// Use cheapest provider for this model
const cheapModel = OpenRouterHelpers.useCheapest('anthropic/claude-sonnet-4-5');
// Returns: 'anthropic/claude-sonnet-4-5:floor'

// Use fastest provider for this model
const fastModel = OpenRouterHelpers.useFastest('openai/gpt-4o');
// Returns: 'openai/gpt-4o:nitro'

Routing Suffixes:
  • :floor - Sort by price (cheapest first)
  • :nitro - Sort by throughput (fastest first)
  • No suffix - Default routing (balanced)

Use Cases:
  • Development: Use :floor for cost savings
  • Production: Use :nitro for user-facing features
  • Background jobs: Use :floor for async processing
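Those use cases could be encoded as a tiny helper that picks the suffix by context (hypothetical — the project's OpenRouterHelpers may expose something similar):

```typescript
// Hypothetical helper mapping deployment context to a routing suffix.
type RoutingContext = 'development' | 'production' | 'background';

function routeModel(model: string, context: RoutingContext): string {
  if (context === 'production') return `${model}:nitro`; // user-facing: fastest
  return `${model}:floor`;                               // dev/background: cheapest
}
```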

Site Attribution

OpenRouter requires attribution headers for analytics and rate limits:

// Automatically added by provider
{
  'HTTP-Referer': process.env.NEXT_PUBLIC_SITE_URL,  // Your site URL
  'X-Title': 'RMS Recipe Management'                  // App name
}

Benefits:
  • Better rate limits for attributed traffic
  • Analytics in OpenRouter dashboard
  • Helps with debugging and support

Provider Selection Strategy

Development: Use OpenRouter for flexibility and cost
  • Access to 100+ models through a single API
  • Easy switching between GPT-4, Claude, Llama, etc.
  • Pay-per-use, no subscriptions
  • Model routing for cost optimization

Production: Use Anthropic direct for performance
  • Prompt caching reduces costs by 90%
  • Extended thinking for complex recipe generation
  • Structured outputs for reliable JSON
  • Better rate limits

Recommended Fallback Chain:

await AIProviderFactory.createWithFallback([
  { providerType: 'anthropic', apiKey: env.ANTHROPIC_API_KEY },  // Primary
  { providerType: 'openrouter', apiKey: env.OPENROUTER_API_KEY }  // Fallback
]);

Configuration

Environment Variables

Set environment variables for desired providers:

# Primary provider (Anthropic)
ANTHROPIC_API_KEY=sk-ant-...
DEFAULT_AI_PROVIDER=anthropic
DEFAULT_AI_MODEL=claude-sonnet-4-5-20250929

# Fallback provider (OpenRouter)
OPENROUTER_API_KEY=sk-or-...

# Optional: Site attribution
NEXT_PUBLIC_SITE_URL=https://rms.example.com

User Preferences

User-specific settings stored in rms.user_settings table override defaults:

SELECT ai_provider, ai_model
FROM rms.user_settings
WHERE user_id = auth.uid();

Allows per-user provider selection (power users can choose GPT-4, budget users can choose cheaper models).
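Resolution might look like a simple merge of the (possibly partial or missing) user row over the environment defaults; a sketch with assumed field names:

```typescript
// Merge a (possibly partial or missing) user settings row over env defaults.
// Field names are assumptions for illustration.
interface AIConfig {
  provider: string;
  model: string;
}

function resolveAIConfig(
  envDefaults: AIConfig,
  userSettings?: Partial<AIConfig> | null,
): AIConfig {
  return {
    provider: userSettings?.provider ?? envDefaults.provider,
    model: userSettings?.model ?? envDefaults.model,
  };
}
```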

Usage Example

Complete example with error handling:

import { AIProviderFactory, AIProviderError } from '@/lib/ai';

async function generateRecipe(prompt: string) {
  const provider = AIProviderFactory.createFromEnv();

  try {
    const recipeJson = await provider.generateCompletion(
      prompt,  // e.g. "Generate a pasta recipe with chicken and tomatoes"
      `You are a professional chef assistant...`,
      {
        maxTokens: 2048,
        temperature: 0.7,
        enableCaching: true,  // Anthropic only
        retry: {
          maxRetries: 3,
          backoff: 'exponential'  // 1s, 2s, 4s delays
        }
      }
    );

    // Parse and validate
    const recipe = JSON.parse(recipeJson);
    return recipe;

  } catch (error) {
    if (error instanceof AIProviderError) {
      console.error(`AI generation failed:`, {
        provider: error.provider,
        retryable: error.retryable,
        statusCode: error.statusCode
      });

      // Handle gracefully (show fallback recipe, retry with different provider, etc.)
    }
    throw error;
  } finally {
    await provider.close();  // Clean up resources
  }
}

Error Handling

The provider layer throws AIProviderError with detailed context:

Properties:
  • provider: Which provider failed ('anthropic' | 'openrouter')
  • statusCode: HTTP status code (if applicable)
  • retryable: Whether retry is recommended
  • originalError: Underlying error for debugging
  • message: Human-readable error description
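An illustrative shape for such an error class, based only on the properties listed above (the real class in /lib/ai may differ):

```typescript
// Illustrative shape only; the real AIProviderError in /lib/ai may differ.
class AIProviderError extends Error {
  constructor(
    message: string,
    public readonly provider: 'anthropic' | 'openrouter',
    public readonly retryable: boolean,
    public readonly statusCode?: number,
    public readonly originalError?: unknown,
  ) {
    super(message);
    this.name = 'AIProviderError';
  }
}
```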

Example Error Handling:

try {
  const result = await provider.generateCompletion(prompt);
} catch (error) {
  if (error instanceof AIProviderError) {
    if (error.retryable) {
      // Retry with backoff (already done automatically)
      // Or try fallback provider
    } else {
      // Report to user (invalid API key, bad request, etc.)
      throw new Error(`Cannot generate recipe: ${error.message}`);
    }
  }
}

Testing Providers

Test provider connections before using them:

// Test connection without creating persistent instance
const result = await AIProviderFactory.testConnection({
  providerType: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY!
});

console.log(result);
// {
//   success: true,
//   message: 'Successfully connected to Anthropic (claude-sonnet-4-5-20250929)',
//   details: {
//     provider: 'anthropic',
//     model: 'claude-sonnet-4-5-20250929',
//     latency: 245  // milliseconds
//   }
// }

Use Cases:
  • Admin settings validation (test API key before saving)
  • Health checks (verify providers are operational)
  • Debugging (diagnose connectivity issues)

Cost Optimization

Anthropic Prompt Caching:
  • First call: ~$0.0124 for 4K tokens
  • Cached calls: ~$0.0012 for 4K tokens (90% savings)
  • Cache duration: 5 minutes
  • ROI: Pays for itself after 2-3 cached calls

OpenRouter :floor Routing:
  • Automatically uses cheapest provider
  • Can save 50-70% vs direct API
  • No code changes required

Combined Strategy:
  1. Use Anthropic with caching for user-facing features
  2. Use OpenRouter :floor for background jobs
  3. Cache system prompts aggressively
  4. Monitor usage stats and adjust

Estimated Costs (4K token recipe):
  • Anthropic (uncached): $0.012
  • Anthropic (cached): $0.001
  • OpenRouter (floor): $0.005-0.008
  • OpenRouter (nitro): $0.015-0.020


Features

Current Features (V1)

Recipe Management

  • Recipe Status Workflow: idea → draft → complete → archived
  • Favorites: Quick-access starred recipes
  • Lineage Tracking: See which idea became which full recipe
  • Soft Delete: Deleted recipes retained for recovery
  • Image Support: Recipe photos (via image_url field)

AI Recipe Generation

  • Multiple Providers: Anthropic, OpenAI, OpenRouter, DeepSeek, Ollama support
  • 3 Generation Types:
    • Generate Ideas (4 quick concepts)
    • Generate Full Recipe (complete with steps)
    • Enhance Recipe (deep research with web search - Anthropic only)
  • User Preferences: Customize by cuisine, dietary restrictions, skill level, max prep time

Ingredient System

  • Master Ingredient List: Shared across all users
  • Pantry Tracking: User-specific inventory with quantities
  • Smart Suggestions: Common ingredients pre-seeded
  • Flexible Quantities: Text-based for natural measurements ("2 cups", "1 bunch")

Tag Organization

  • 3 Tag Types:
    • Cuisine (Italian, Asian, Mexican, Mediterranean, etc.)
    • Meal Type (Breakfast, Lunch, Dinner, Dessert, Snack, Appetizer)
    • Dietary (Vegetarian, Vegan, Gluten-Free, Keto, Paleo, etc.)
  • Multi-Tag Support: Recipes can have multiple tags
  • Searchable: Filter recipes by any combination of tags

Planned Features (V2)

  • Meal Planning: Weekly meal schedules with calendar view
  • Shopping Lists: Auto-generated from meal plans with category grouping
  • Ingredient Transformations: Track cream → butter + buttermilk patterns
  • Timeline Optimization: Cooking sequence with prep parallelization
  • Nutrition Info: Calorie and macro tracking
  • Recipe Scaling: Automatic serving size adjustment
  • Import/Export: Recipe sharing and backup
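As a taste of the planned scaling feature: for numeric quantities, serving adjustment is just a ratio. A hypothetical V2 helper (text quantities like "1 bunch" would need parsing first):

```typescript
// Hypothetical V2 helper: scale numeric quantities by a serving ratio.
interface ScalableIngredient {
  name: string;
  quantity: number; // numeric quantities only; "1 bunch" needs parsing first
  unit: string;
}

function scaleIngredients(
  ingredients: ScalableIngredient[],
  fromServings: number,
  toServings: number,
): ScalableIngredient[] {
  const ratio = toServings / fromServings;
  return ingredients.map((i) => ({ ...i, quantity: i.quantity * ratio }));
}
```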

API Endpoints (Planned)

All API endpoints will be Next.js API routes with Supabase integration.

Recipe Endpoints

// Core CRUD
GET    /api/recipes              // List user's recipes
POST   /api/recipes              // Create recipe
GET    /api/recipes/[id]         // Get single recipe
PUT    /api/recipes/[id]         // Update recipe
DELETE /api/recipes/[id]         // Soft delete recipe

// AI Generation
POST   /api/ai/generate-ideas    // Generate 4 recipe ideas
POST   /api/ai/generate-full     // Generate complete recipe
POST   /api/ai/enhance-recipe    // Deep research + web search

Ingredient Endpoints

GET    /api/ingredients          // List all ingredients
POST   /api/ingredients          // Add ingredient to master list
GET    /api/inventory            // User's pantry
POST   /api/inventory            // Add to pantry
PUT    /api/inventory/[id]       // Update quantity
DELETE /api/inventory/[id]       // Remove from pantry

Tag Endpoints

GET    /api/tags                 // List all tags
GET    /api/tags?type=cuisine    // Filter by tag type

Configuration

Environment Variables

Required:

# Supabase
NEXT_PUBLIC_SUPABASE_URL=http://10.89.97.214:8000
NEXT_PUBLIC_SUPABASE_ANON_KEY=<shared-anon-key>
SUPABASE_SERVICE_ROLE_KEY=<service-role-key>

# AI Provider (at least one)
OPENROUTER_API_KEY=<your-key>
DEFAULT_AI_PROVIDER=openrouter
DEFAULT_AI_MODEL=openai/gpt-4o-mini

Optional AI Providers:

ANTHROPIC_API_KEY=<your-key>
OPENAI_API_KEY=<your-key>
DEEPSEEK_API_KEY=<your-key>
OLLAMA_BASE_URL=http://localhost:11434

User Settings

AI preferences are stored per-user in rms.user_settings:

{
  ai_provider: 'openrouter' | 'openai' | 'anthropic' | 'ollama' | 'deepseek',
  ai_model: string,
  default_servings: number,
  preferred_cuisines: string[],
  dietary_restrictions: string[],
  skill_level: 'beginner' | 'intermediate' | 'advanced',
  max_prep_time: number | null
}

Migration Notes

From FastAPI/Vue to Next.js/Supabase

What We're Keeping:
  • ✅ All recipe generation logic (prompt building, JSON parsing)
  • ✅ AI provider factory (6 providers with fallback)
  • ✅ Ingredient normalization and matching
  • ✅ Database schema design patterns
  • ✅ Response wrapper pattern { success, message, data }

What We're Rewriting:
  • ♻️ Authentication (JWT → Supabase Auth)
  • ♻️ Image handling (file endpoint → Supabase Storage)
  • ♻️ Admin dashboard (Vue → React/Next.js)
  • ♻️ Logging (Loki → Vercel/Sentry)

What We're Discarding:
  • ❌ OpenTofu/Ansible infrastructure (LXC provisioning)
  • ❌ Makefile deployment scripts (moving to k8s)
  • ❌ Python virtual environment
  • ❌ Alembic migrations (converted to Supabase migrations)

Code Reduction: ~4,300 lines of infrastructure code eliminated

See /root/rms/NEXTJS_MIGRATION_CHECKLIST.md and /root/rms/RMS_CODEBASE_AUDIT.md for detailed analysis.


Architecture

Production (Kubernetes) - Planned

Internet/Network
  ↓
MetalLB LoadBalancer (10.89.97.TBD:80)
  ↓
rms Service (ClusterIP)
  ↓
rms Pod(s)
  ↓
Supabase API Gateway (Kong: 10.89.97.214:8000)
  ↓
PostgreSQL (rms schema)

Development (LXC 170)

Developer
  ↓
LXC 170 (10.89.97.170:3000)
  ↓
Next.js Dev Server
  ↓
k8s Supabase (10.89.97.214:8000)
  ↓
PostgreSQL (rms schema)

Note: Development currently points to production Supabase. Sandbox environment is planned for future phase.


Helper Functions

The schema includes 4 PostgreSQL functions for computed fields:

Recipe Metrics

-- Get ingredient count for a recipe
SELECT rms.get_recipe_ingredient_count(recipe_id);

-- Get step count for a recipe
SELECT rms.get_recipe_step_count(recipe_id);

-- Check if recipe has complete data (ingredients + steps)
SELECT rms.is_recipe_complete(recipe_id);

Automatic Timestamps

-- Triggers update_updated_at_column() on:
-- - rms.recipes (before update)
-- - rms.user_settings (before update)

Troubleshooting

Database Schema Issues

Tables not appearing in rms schema:

# Check if schema exists
kubectl exec -n supabase postgres-0 -- psql -U postgres -d postgres -c "\\dn"

# Check tables in rms schema
kubectl exec -n supabase postgres-0 -- psql -U postgres -d postgres -c "\\dt rms.*"

# If missing, ensure migration uses explicit rms. prefixes
# DO NOT rely on SET search_path in kubectl exec context

RLS policies blocking queries:

# Test as authenticated user
curl -H "Authorization: Bearer <user-token>" \
     -H "apikey: <anon-key>" \
     http://10.89.97.214:8000/rest/v1/recipes

# Check if user_id matches auth.uid()
kubectl exec -n supabase postgres-0 -- psql -U postgres -d postgres -c \
  "SELECT id, user_id FROM rms.recipes WHERE user_id = '<user-uuid>';"

Supabase Connection Issues

Cannot connect to k8s Supabase:

# Check Supabase services running
kubectl get pods -n supabase

# Verify PostgREST has rms schema enabled
kubectl get configmap -n supabase rest-config -o yaml | grep PGRST_DB_SCHEMA

# Expected: "home_portal,money_tracker,rms,public,storage,graphql_public"

# Test API connectivity
curl http://10.89.97.214:8000/rest/v1/

Schema not exposed via PostgREST:

# Add rms to PGRST_DB_SCHEMA in rest-config ConfigMap
kubectl edit configmap -n supabase rest-config

# Restart PostgREST to pick up changes
kubectl rollout restart deployment -n supabase rest

Development Environment

Dev server won't start:

# Check if port 3000 is in use
lsof -i :3000

# Check tmux session
tmux ls
tmux attach -t rms

# Check environment variables
cd /root/projects/rms
cat .env.local


Deployment Checklist

When ready to deploy to production k8s:

  • [ ] Complete Next.js application with all V1 features
  • [ ] Generate TypeScript types from database schema
  • [ ] Implement all API routes (recipes, ingredients, tags, AI generation)
  • [ ] Build React UI components
  • [ ] Set up Supabase Auth integration
  • [ ] Test RLS policies with multiple users
  • [ ] Create Dockerfile
  • [ ] Build and push image to k8s registry
  • [ ] Create k8s manifests (namespace, configmap, secrets, deployment, service)
  • [ ] Deploy with LoadBalancer
  • [ ] Update documentation with production URLs
  • [ ] Create monitoring/observability setup

Support


Created: November 11, 2025
V1 Schema Status: ✅ Completed
Development Status: 🚧 In Progress
Production Status: ⏳ Pending