
Configuration

Complete configuration guide for Earna AI Console. This covers GPT-4o integration, alternative AI models via Vercel AI SDK v5, Supabase backend, HeyGen avatars, real-time voice features, and TypeScript strict mode.

Environment Variables

Earna AI Console requires several environment variables for proper operation. Create a .env.local file in your project root:

```bash
# Primary AI Model - OpenAI GPT-4o
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Alternative AI Models (Optional)
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
GOOGLE_GENERATIVE_AI_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
MISTRAL_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
PERPLEXITY_API_KEY=pplx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
XAI_API_KEY=xai-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Supabase Backend
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# HeyGen Avatars
HEYGEN_API_KEY=your_heygen_api_key
NEXT_PUBLIC_HEYGEN_APP_ID=your_heygen_app_id

# Real-time Voice
ELEVENLABS_API_KEY=your_elevenlabs_api_key
DEEPGRAM_API_KEY=your_deepgram_api_key

# Application
NODE_ENV=production
NEXT_PUBLIC_APP_URL=https://your-domain.com

# Analytics & Monitoring
VERCEL_ANALYTICS_ID=your_vercel_analytics_id
SENTRY_DSN=https://xxxx@sentry.io/project

# Feature Flags
ENABLE_GPT4O_VISION=true
ENABLE_ALTERNATIVE_MODELS=true
ENABLE_AVATARS=true
ENABLE_VOICE_MODE=true
ENABLE_FILE_UPLOADS=true
ENABLE_DEBUG_MODE=false
```

Never commit your .env.local file to version control. Add it to your .gitignore file.
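
Missing variables are easier to catch at boot than at first request. A minimal sketch of a startup check (the `assertEnv` helper and the file path are illustrative, not part of the codebase):

```typescript
// lib/config/assert-env.ts (hypothetical helper)
// Fails fast at startup when a required variable is missing or empty.
export function assertEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(', ')}`
    );
  }
}

// Called once from app startup, e.g.:
// assertEnv(['OPENAI_API_KEY', 'NEXT_PUBLIC_SUPABASE_URL', 'NEXT_PUBLIC_SUPABASE_ANON_KEY']);
```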

GPT-4o Configuration

Primary Model Setup

Configure GPT-4o as the primary AI model:

```typescript
// lib/ai/openai.ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export const gpt4oConfig = {
  model: 'gpt-4o',
  temperature: 0.7,
  maxOutputTokens: 4096, // AI SDK v5 renamed `maxTokens` to `maxOutputTokens`
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
  systemPrompt: `You are a helpful AI assistant powered by GPT-4o.
Provide clear, accurate, and helpful responses.
You have access to vision capabilities for analyzing images.
You can execute tools and functions when needed.`
};

export async function sendMessage(
  message: string,
  images?: string[],
  history: any[] = []
) {
  const messages = [
    { role: 'system', content: gpt4oConfig.systemPrompt },
    ...history,
    {
      role: 'user',
      content: images
        ? [
            { type: 'text', text: message },
            ...images.map(url => ({ type: 'image', image: url }))
          ]
        : message
    }
  ];

  const response = await generateText({
    model: openai('gpt-4o'),
    messages,
    temperature: gpt4oConfig.temperature,
    maxOutputTokens: gpt4oConfig.maxOutputTokens
  });

  return response;
}
```

Alternative Models Configuration

Multi-Model Support via Vercel AI SDK

Configure alternative AI models alongside GPT-4o:

```typescript
// lib/ai/models.ts
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';
import { xai } from '@ai-sdk/xai';
import { createOllama } from 'ollama-ai-provider';

export const models = {
  // Primary Model
  'gpt-4o': {
    provider: openai('gpt-4o'),
    name: 'GPT-4o',
    features: ['chat', 'vision', 'tools', 'voice'],
    contextLength: 128000,
    primary: true
  },

  // Alternative Models
  'claude-3-opus': {
    provider: anthropic('claude-3-opus-20240229'),
    name: 'Claude 3 Opus',
    features: ['chat', 'vision', 'tools'],
    contextLength: 200000
  },
  'gemini-1.5-pro': {
    provider: google('gemini-1.5-pro-latest'),
    name: 'Gemini 1.5 Pro',
    features: ['chat', 'vision', 'tools'],
    contextLength: 1000000
  },
  'mistral-large': {
    provider: mistral('mistral-large-latest'),
    name: 'Mistral Large',
    features: ['chat', 'tools'],
    contextLength: 32000
  },
  'grok-2': {
    provider: xai('grok-2-latest'),
    name: 'xAI Grok 2',
    features: ['chat', 'tools'],
    contextLength: 100000
  },

  // Local Model
  'llama-3.1': {
    provider: createOllama()('llama3.1:70b'),
    name: 'Llama 3.1 70B',
    features: ['chat'],
    contextLength: 8192,
    local: true
  }
};

export type ModelId = keyof typeof models;

export function getModel(modelId: string) {
  // The cast keeps string lookups legal under strict mode
  return models[modelId as ModelId] ?? models['gpt-4o'];
}

export function isModelAvailable(modelId: string): boolean {
  const model = models[modelId as ModelId];
  if (!model) return false;

  // Check if the corresponding API key is configured
  switch (modelId) {
    case 'gpt-4o':
      return !!process.env.OPENAI_API_KEY;
    case 'claude-3-opus':
      return !!process.env.ANTHROPIC_API_KEY;
    case 'gemini-1.5-pro':
      return !!process.env.GOOGLE_GENERATIVE_AI_API_KEY;
    default:
      return true;
  }
}
```

Supabase Configuration

Database & Authentication Setup

Configure Supabase for backend services:

```typescript
// lib/supabase/client.ts
import { createBrowserClient } from '@supabase/ssr';
import { Database } from '@/types/database';

export function createClient() {
  return createBrowserClient<Database>(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      auth: {
        persistSession: true,
        autoRefreshToken: true,
        detectSessionInUrl: true
      },
      global: {
        headers: { 'x-application-name': 'earna-ai-console' }
      }
    }
  );
}

// Server client for API routes
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';

// Note: cookies() is async in Next.js 15, so the factory must be awaited.
export async function createServerSupabaseClient() {
  const cookieStore = await cookies();

  return createServerClient<Database>(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        get(name: string) {
          return cookieStore.get(name)?.value;
        },
        set(name: string, value: string, options: any) {
          cookieStore.set({ name, value, ...options });
        },
        remove(name: string, options: any) {
          cookieStore.set({ name, value: '', ...options });
        }
      }
    }
  );
}
```

Avatar Configuration

HeyGen Streaming Avatars

Configure interactive avatars:

```typescript
// lib/avatars/heygen.ts
import { StreamingAvatar, AvatarQuality } from '@heygen/streaming-avatar';

export const heygenConfig = {
  apiKey: process.env.HEYGEN_API_KEY!,
  appId: process.env.NEXT_PUBLIC_HEYGEN_APP_ID!,
  defaultAvatar: 'josh_lite3_20230714',
  quality: AvatarQuality.High,
  voice: {
    voiceId: 'en-US-BrianNeural',
    rate: 1.0,
    pitch: 0,
    emotion: 'friendly'
  }
};

export async function createAvatarSession(avatarId?: string) {
  const response = await fetch('https://api.heygen.com/v1/streaming.new', {
    method: 'POST',
    headers: {
      'X-Api-Key': heygenConfig.apiKey,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      avatar_id: avatarId || heygenConfig.defaultAvatar,
      voice: heygenConfig.voice,
      quality: heygenConfig.quality
    })
  });

  const data = await response.json();
  return {
    sessionId: data.session_id,
    accessToken: data.access_token,
    iceServers: data.ice_servers
  };
}

export class AvatarController {
  private avatar: StreamingAvatar;

  constructor(token: string) {
    this.avatar = new StreamingAvatar({ token });
  }

  async initialize(videoElement: HTMLVideoElement) {
    await this.avatar.init(videoElement);
  }

  async speak(text: string) {
    await this.avatar.speak({ text, taskType: 'talk', taskMode: 'sync' });
  }

  async interrupt() {
    await this.avatar.interrupt();
  }

  async terminate() {
    await this.avatar.terminate();
  }
}
```

Voice Configuration

GPT-4o Realtime Voice

Configure real-time voice conversations:

```typescript
// lib/voice/realtime.ts
export const realtimeConfig = {
  url: 'wss://api.openai.com/v1/realtime',
  model: 'gpt-4o-realtime-preview',
  voice: 'alloy', // alloy, echo, fable, onyx, nova, shimmer
  turnDetection: {
    type: 'server_vad',
    threshold: 0.5,
    prefixPaddingMs: 300,
    silenceDurationMs: 500
  }
};

export async function createRealtimeSession() {
  // Get an ephemeral key from our own API route
  const response = await fetch('/api/realtime-session', { method: 'POST' });
  const { key, url } = await response.json();

  // Connect to the Realtime WebSocket
  const ws = new WebSocket(url);

  ws.onopen = () => {
    ws.send(JSON.stringify({
      type: 'session.update',
      session: {
        modalities: ['text', 'audio'],
        voice: realtimeConfig.voice,
        instructions: 'You are a helpful assistant.',
        turn_detection: realtimeConfig.turnDetection
      }
    }));
  };

  return ws;
}

// Text-to-Speech with OpenAI
export async function textToSpeech(text: string, voice = 'nova') {
  const response = await fetch('https://api.openai.com/v1/audio/speech', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'tts-1',
      input: text,
      voice,
      response_format: 'mp3'
    })
  });

  return response.blob();
}

// Speech-to-Text with Whisper
export async function speechToText(audioBlob: Blob) {
  const formData = new FormData();
  formData.append('file', audioBlob, 'audio.webm');
  formData.append('model', 'whisper-1');

  const response = await fetch('https://api.openai.com/v1/audio/transcriptions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: formData
  });

  const { text } = await response.json();
  return text;
}
```

Application Configuration

Next.js 15 Configuration

Configure Next.js for optimal performance:

```javascript
// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  // In Next.js 15 this option moved out of `experimental`
  // (formerly experimental.serverComponentsExternalPackages)
  serverExternalPackages: ['@supabase/ssr'],
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: '**.supabase.co',
        pathname: '/storage/v1/object/public/**',
      },
      {
        protocol: 'https',
        hostname: 'api.heygen.com',
      }
    ],
  },
  async headers() {
    return [
      {
        source: '/api/:path*',
        headers: [
          { key: 'Access-Control-Allow-Origin', value: '*' },
          { key: 'Access-Control-Allow-Methods', value: 'GET,OPTIONS,PATCH,DELETE,POST,PUT' },
          { key: 'Access-Control-Allow-Headers', value: 'X-CSRF-Token, X-Requested-With, Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version, Authorization' },
        ],
      },
    ];
  },
  async rewrites() {
    return [
      {
        source: '/realtime/:path*',
        destination: '/api/realtime/:path*',
      },
    ];
  },
};

module.exports = nextConfig;
```

TypeScript Configuration

Strict Mode Enabled: We use TypeScript strict mode for maximum type safety and AI SDK v5 compatibility.

```jsonc
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": true,                       // Enables all strict type checking options
    "strictNullChecks": true,             // Enable strict null checks
    "strictFunctionTypes": true,          // Enable strict checking of function types
    "strictBindCallApply": true,          // Enable strict 'bind', 'call', and 'apply' methods
    "strictPropertyInitialization": true, // Enable strict checking of property initialization
    "noImplicitThis": true,               // Raise error on 'this' expressions with an implied 'any' type
    "alwaysStrict": true,                 // Ensure 'use strict' is always emitted
    "noImplicitAny": true,                // Raise error on expressions with an implied 'any' type
    "noEmit": true,
    "esModuleInterop": true,
    "module": "esnext",
    "moduleResolution": "bundler",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "incremental": true,
    "plugins": [{ "name": "next" }],
    "baseUrl": ".",
    "paths": {
      "@/*": ["./*"],
      "@/components/*": ["./components/*"],
      "@/lib/*": ["./lib/*"],
      "@/app/*": ["./app/*"]
    }
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
  "exclude": ["node_modules"]
}
```

Performance Configuration

Caching Strategy

```typescript
// lib/config/cache.ts
export const cacheConfig = {
  // AI responses
  aiResponses: {
    ttl: 60 * 60, // 1 hour
    maxSize: 1000
  },
  // User sessions
  userSessions: {
    ttl: 24 * 60 * 60, // 24 hours
    maxSize: 500
  },
  // File uploads
  fileUploads: {
    ttl: 7 * 24 * 60 * 60, // 7 days
    maxSize: 100
  }
};

// Edge config for Vercel
export const edgeConfig = {
  runtime: 'edge',
  regions: ['iad1', 'sfo1', 'sin1'],
  maxDuration: 25 // seconds
};
```
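
The `cacheConfig` entries describe TTL and size caps only; a minimal in-memory cache honoring those semantics might look like the sketch below (the `TtlCache` class is illustrative, not the production implementation):

```typescript
// A small TTL + max-size cache, sketching the semantics behind cacheConfig.
export class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlSeconds: number, private maxSize: number) {}

  set(key: string, value: V): void {
    // Evict the oldest entry once the size cap is reached
    // (Map preserves insertion order, so the first key is the oldest).
    if (this.store.size >= this.maxSize && !this.store.has(key)) {
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlSeconds * 1000 });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Expired: drop lazily on read.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

// Usage with the aiResponses settings: new TtlCache<string>(60 * 60, 1000)
```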

Security Configuration

```typescript
// lib/config/security.ts
export const securityConfig = {
  // Rate limiting
  rateLimiting: {
    chat: {
      windowMs: 60 * 1000, // 1 minute
      maxFree: 10,         // 10 messages per minute (free)
      maxPro: 100,         // 100 messages per minute (pro)
    },
    api: {
      windowMs: 15 * 60 * 1000, // 15 minutes
      max: 100
    }
  },

  // CORS settings
  cors: {
    origin: process.env.NODE_ENV === 'production'
      ? ['https://earna.sh', 'https://app.earna.sh']
      : ['http://localhost:3000', 'http://localhost:3001'],
    credentials: true
  },

  // Content Security Policy
  csp: {
    defaultSrc: ["'self'"],
    scriptSrc: ["'self'", "'unsafe-eval'", "'unsafe-inline'"],
    styleSrc: ["'self'", "'unsafe-inline'"],
    imgSrc: ["'self'", "data:", "https:", "blob:"],
    connectSrc: [
      "'self'",
      "https://api.openai.com",
      "wss://api.openai.com",
      "https://*.supabase.co",
      "https://api.heygen.com"
    ],
    mediaSrc: ["'self'", "blob:", "https:"],
    frameSrc: ["'self'", "https://www.heygen.com"]
  }
};
```
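
The rate-limiting numbers above can be enforced with a simple fixed-window counter. A sketch of that idea (the class name is illustrative; in production a shared store such as Redis would replace the in-memory map so limits hold across instances):

```typescript
// Fixed-window rate limiter matching the windowMs/max shape of securityConfig.
export class FixedWindowLimiter {
  private hits = new Map<string, { windowStart: number; count: number }>();

  constructor(private windowMs: number, private max: number) {}

  // Returns true if the request is allowed, false once the limit is exhausted.
  allow(key: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request, or the previous window has elapsed: start a new window.
      this.hits.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.max) return false;
    entry.count += 1;
    return true;
  }
}

// e.g. free-tier chat: new FixedWindowLimiter(60 * 1000, 10)
```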

Feature Flags

Control feature availability:

```typescript
// lib/config/features.ts
export interface FeatureFlags {
  enableGPT4oVision: boolean;
  enableAlternativeModels: boolean;
  enableAvatars: boolean;
  enableVoiceMode: boolean;
  enableFileUploads: boolean;
  enableCodeExecution: boolean;
  enableWebSearch: boolean;
}

export const featureFlags: FeatureFlags = {
  enableGPT4oVision: process.env.ENABLE_GPT4O_VISION === 'true',
  enableAlternativeModels: process.env.ENABLE_ALTERNATIVE_MODELS === 'true',
  enableAvatars: process.env.ENABLE_AVATARS === 'true',
  enableVoiceMode: process.env.ENABLE_VOICE_MODE === 'true',
  enableFileUploads: process.env.ENABLE_FILE_UPLOADS === 'true',
  enableCodeExecution: false, // Beta feature
  enableWebSearch: false      // Coming soon
};

export function isFeatureEnabled(flag: keyof FeatureFlags): boolean {
  if (process.env.NODE_ENV === 'development') {
    return true; // All features enabled in development
  }
  return featureFlags[flag];
}
```

Daily Limits Configuration

```typescript
// lib/config/limits.ts
import { createClient } from '@/lib/supabase/client';

export const subscriptionLimits = {
  free: {
    dailyMessages: 10,
    avatarMinutes: 5,
    voiceMinutes: 10,
    maxFileSize: 5 * 1024 * 1024, // 5 MB
    maxFilesPerMessage: 1
  },
  pro: {
    dailyMessages: 100,
    avatarMinutes: Infinity,
    voiceMinutes: Infinity,
    maxFileSize: 50 * 1024 * 1024, // 50 MB
    maxFilesPerMessage: 5
  },
  enterprise: {
    dailyMessages: Infinity,
    avatarMinutes: Infinity,
    voiceMinutes: Infinity,
    maxFileSize: 500 * 1024 * 1024, // 500 MB
    maxFilesPerMessage: 10
  }
};

export async function checkDailyLimit(userId: string): Promise<boolean> {
  const supabase = createClient();

  const { data: user } = await supabase
    .from('users')
    .select('subscription_tier, daily_message_limit, messages_used_today')
    .eq('id', userId)
    .single();

  if (!user) return false;
  return user.messages_used_today < user.daily_message_limit;
}
```

For production deployment, ensure all API keys are properly secured and never expose them in client-side code. Use environment variables and secure secret management systems.

Troubleshooting

Common Configuration Issues

  1. GPT-4o API Not Working

    • Verify OPENAI_API_KEY is correctly set
    • Check API key has access to GPT-4o model
    • Ensure proper error handling for streaming responses
    • Monitor your account's rate limits (requests per minute vary by usage tier)
  2. Alternative Models Not Available

    • Validate respective API keys (ANTHROPIC_API_KEY, etc.)
    • Check model availability in your region
    • Verify Vercel AI SDK provider configuration
    • Test fallback to GPT-4o is working
  3. Supabase Connection Issues

    • Check NEXT_PUBLIC_SUPABASE_URL and NEXT_PUBLIC_SUPABASE_ANON_KEY format
    • Ensure Row Level Security policies are configured
    • Verify database migrations are applied
    • Check Realtime subscriptions are enabled
  4. Avatar/Voice Features Not Working

    • Validate HeyGen API credentials
    • Check WebRTC connectivity for avatars
    • Ensure microphone permissions for voice
    • Verify OpenAI Realtime API access
  5. File Upload Issues

    • Check Supabase Storage bucket configuration
    • Verify file size limits per subscription tier
    • Ensure proper CORS configuration
    • Test file type validation
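
For the first two issues, a quick diagnostic is to check which provider keys are actually visible to the process. A sketch mirroring the availability logic from the models configuration (the key-to-model mapping below is illustrative):

```typescript
// Reports which models have their provider API key set in the environment.
const providerKeys: Record<string, string> = {
  'gpt-4o': 'OPENAI_API_KEY',
  'claude-3-opus': 'ANTHROPIC_API_KEY',
  'gemini-1.5-pro': 'GOOGLE_GENERATIVE_AI_API_KEY',
  'mistral-large': 'MISTRAL_API_KEY',
  'grok-2': 'XAI_API_KEY',
};

export function configuredModels(
  env: Record<string, string | undefined> = process.env
): string[] {
  return Object.entries(providerKeys)
    .filter(([, key]) => !!env[key]) // keep models whose key is present and non-empty
    .map(([model]) => model);
}

// Example: console.log(configuredModels()) lists models whose keys are set.
```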

For more help, see our Troubleshooting Guide or visit GitHub Issues.
