Quick Reference

Essential commands, API endpoints, and configurations for Earna AI Console.

Common Commands

Development

# From monorepo root (recommended)
pnpm turbo dev --filter=console
pnpm turbo build --filter=console
pnpm turbo lint --filter=console
pnpm turbo typecheck --filter=console

# From console directory
cd console
pnpm dev
pnpm build
pnpm start
pnpm type-check
pnpm lint
pnpm format

Database

# Generate Supabase types
pnpm generate:types

# Run migrations
pnpm db:migrate

# Reset database
pnpm db:reset

# Seed database
pnpm db:seed

Deployment

# Deploy to Vercel
vercel --prod

# Deploy preview
vercel

# Check deployment status
vercel logs

# Set environment variable
vercel env add VARIABLE_NAME production

API Endpoints

Chat Operations

Method   Endpoint            Description
POST     /api/chat           Send message and stream response
GET      /api/chat/history   Get conversation history
DELETE   /api/chat/[id]      Delete conversation
POST     /api/chat/title     Generate chat title
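
A minimal client-side sketch of calling POST /api/chat and reading the streamed reply. The request body shape ({ messages }) and plain-text stream format are assumptions, not confirmed by this reference.

// Hypothetical call to POST /api/chat; adjust the body shape to the actual contract.
async function sendMessage(messages: { role: string; content: string }[]) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  });
  if (!res.ok || !res.body) throw new Error(`Chat request failed: ${res.status}`);

  // Read the streamed response incrementally.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}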

AI Models

Method   Endpoint                   Description
GET      /api/models                List available models
POST     /api/models/switch         Switch active model
GET      /api/models/[id]/status    Check model availability
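
A short sketch of listing models and switching the active one. The response shape and the switch payload ({ modelId }) are assumptions.

// Assumed payload/response shapes for the model endpoints above.
async function listModels(): Promise<unknown[]> {
  const res = await fetch('/api/models');
  return res.json();
}

async function switchModel(modelId: string) {
  await fetch('/api/models/switch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ modelId }), // assumed payload
  });
}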

Voice & Avatar

Method   Endpoint              Description
POST     /api/tts              Text-to-speech synthesis
POST     /api/avatar/session   Create avatar session
POST     /api/avatar/speak     Make avatar speak
WS       /api/voice/stream     Real-time voice streaming
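
A hedged sketch of calling the TTS endpoint and playing the result in the browser; the request body ({ text }) and an audio-blob response are assumptions.

// Assumes /api/tts accepts { text } and returns an audio blob.
async function speak(text: string) {
  const res = await fetch('/api/tts', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  const audio = new Audio(URL.createObjectURL(await res.blob()));
  await audio.play();
}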

Environment Variables

Required

# Supabase (Required)
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...
SUPABASE_SERVICE_ROLE_KEY=eyJ...

# OpenAI (Required for GPT-4o)
OPENAI_API_KEY=sk-proj-...
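
A startup guard like the following can fail fast when a required variable is missing; this helper is illustrative, not part of the codebase.

// Illustrative check for the required variables listed above.
const requiredEnv = [
  'NEXT_PUBLIC_SUPABASE_URL',
  'NEXT_PUBLIC_SUPABASE_ANON_KEY',
  'SUPABASE_SERVICE_ROLE_KEY',
  'OPENAI_API_KEY',
] as const;

for (const name of requiredEnv) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}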

Optional AI Providers

# Alternative AI Models
ANTHROPIC_API_KEY=sk-ant-api...
GOOGLE_AI_API_KEY=AIzaSy...
MISTRAL_API_KEY=...
XAI_API_KEY=...
PERPLEXITY_API_KEY=pplx-...
GROQ_API_KEY=gsk_...

# Local Models
OLLAMA_BASE_URL=http://localhost:11434

Optional Features

# HeyGen Avatars
HEYGEN_API_KEY=...

# Monitoring
SENTRY_DSN=https://...
NEXT_PUBLIC_POSTHOG_KEY=phc_...

# Security
ENCRYPTION_KEY=...  # 32-byte hex
NEXTAUTH_SECRET=...

Model Configuration

Available Models

Provider    Model ID          Context   Cost / 1M tokens (input / output)
OpenAI      gpt-4o            128K      $5-15
OpenAI      gpt-4o-mini       128K      $0.15-0.60
Anthropic   claude-3-opus     200K      $15-75
Google      gemini-1.5-pro    1M        $3.50-10.50
Mistral     mistral-large     32K       $2-6
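
The getModel helper used in the patterns further down is not defined in this reference; a plausible sketch maps the model IDs above to AI SDK providers, assuming the corresponding @ai-sdk packages are installed.

import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';

// Hypothetical registry mapping the model IDs above to AI SDK providers.
export function getModel(modelId: string) {
  switch (modelId) {
    case 'gpt-4o':
    case 'gpt-4o-mini':
      return openai(modelId);
    case 'claude-3-opus':
      // Anthropic expects a dated ID for this model family.
      return anthropic('claude-3-opus-20240229');
    case 'gemini-1.5-pro':
      return google('gemini-1.5-pro');
    case 'mistral-large':
      return mistral('mistral-large-latest');
    default:
      throw new Error(`Unknown model: ${modelId}`);
  }
}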

Supabase Schema

Key Tables

-- Users table
users (
  id UUID PRIMARY KEY,
  email TEXT,
  created_at TIMESTAMP,
  daily_message_count INT DEFAULT 0,
  is_pro BOOLEAN DEFAULT false
)

-- Conversations table
conversations (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES users(id),
  title TEXT,
  model TEXT DEFAULT 'gpt-4o',
  created_at TIMESTAMP
)

-- Messages table
messages (
  id UUID PRIMARY KEY,
  conversation_id UUID REFERENCES conversations(id),
  role TEXT,  -- 'user' or 'assistant'
  content TEXT,
  created_at TIMESTAMP
)
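
A hedged query sketch against the messages table, using the Supabase client import listed below and assuming the client helper takes no arguments; the selected columns follow the schema above.

import { createClient } from '@/lib/supabase/client';

// Illustrative fetch of a conversation's messages, oldest first.
export async function getMessages(conversationId: string) {
  const supabase = createClient();
  const { data, error } = await supabase
    .from('messages')
    .select('id, role, content, created_at')
    .eq('conversation_id', conversationId)
    .order('created_at', { ascending: true });

  if (error) throw error;
  return data;
}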

Component Import Paths

// UI Components
import { Button } from '@/components/ui/button'
import { Card } from '@/components/ui/card'
import { Input } from '@/components/ui/input'

// Chat Components
import { ChatInterface } from '@/components/chat/chat-interface'
import { MessageList } from '@/components/chat/message-list'

// AI Services
import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'
import { anthropic } from '@ai-sdk/anthropic'

// Supabase
import { createClient } from '@/lib/supabase/client'
import { createServerClient } from '@/lib/supabase/server'

Useful Patterns

Stream AI Response

const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  temperature: 0.7,
  maxTokens: 4096
});

return result.toDataStreamResponse();

Switch Models Mid-Conversation

// Dynamic model selection
const model = getModel(modelId);

const result = await streamText({
  model,
  messages: conversation.messages
});

Handle Rate Limits

try {
  const response = await streamText({...});
} catch (error) {
  if (error.code === 'rate_limit_exceeded') {
    // Fall back to a different model
    const response = await streamText({
      model: openai('gpt-4o-mini'),
      ...
    });
  }
}

Keyboard Shortcuts

Shortcut           Action                 Context
Cmd/Ctrl + K       Open command palette   Global
Cmd/Ctrl + Enter   Send message           Chat input
Cmd/Ctrl + /       Toggle sidebar         Global
Esc                Stop generation        During streaming
Cmd/Ctrl + N       New conversation       Chat interface
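
A minimal sketch of wiring one of these shortcuts in a React component; the hook and handler names are hypothetical.

import { useEffect } from 'react';

// Illustrative global listener for Cmd/Ctrl + K; openCommandPalette is hypothetical.
export function useCommandPaletteShortcut(openCommandPalette: () => void) {
  useEffect(() => {
    const onKeyDown = (event: KeyboardEvent) => {
      if ((event.metaKey || event.ctrlKey) && event.key === 'k') {
        event.preventDefault();
        openCommandPalette();
      }
    };
    window.addEventListener('keydown', onKeyDown);
    return () => window.removeEventListener('keydown', onKeyDown);
  }, [openCommandPalette]);
}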

Debug Commands

# Check API connectivity
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models/gpt-4o

# Test Supabase connection
npx supabase status

# View Vercel logs
vercel logs --follow

# Check TypeScript errors
npx tsc --noEmit

# Analyze bundle size
pnpm analyze

Error Codes

Code   Meaning               Solution
401    Invalid API key       Check environment variables
429    Rate limit exceeded   Wait or upgrade plan
500    Server error          Check logs, restart server
503    Model unavailable     Try different model