Setup Guide

Getting Started

Get Earna AI Console up and running in under 5 minutes. This guide walks you through setting up the multi-model AI chat platform with GPT-4o as the primary model, plus support for Claude, Gemini, and other leading AI providers.

Monorepo Structure: The console is part of a Turborepo-powered monorepo. You can run commands either from the monorepo root using pnpm turbo --filter=console or directly from the console directory.

Prerequisites

Before you begin, ensure you have:

  • Node.js 22+ and pnpm 9.14+
  • Git for version control
  • Supabase Account (free tier works)
  • AI Provider Keys (at least one):
    • OpenAI API key (required for GPT-4o primary model)
    • Anthropic API key (optional for Claude 3 Opus)
    • Google AI API key (optional for Gemini Pro)
    • Mistral, xAI, Perplexity (optional alternatives)
  • HeyGen API Key (optional, for interactive avatars)

The console uses GPT-4o as the primary model, with support for 8+ AI providers through Vercel AI SDK v5. An OpenAI API key is required for core functionality.
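As a rough illustration of how a model ID might be resolved to its provider (the guide's API route section later references a `getProviderFromModel` helper; the name `providerForModel` and the prefix mapping below are assumptions for illustration, not the console's actual table in `lib/models/index.ts`):

```typescript
// Hypothetical sketch: resolve a model ID prefix to a provider name.
// The real console configures its models in lib/models/index.ts.
const PROVIDER_PREFIXES: Record<string, string> = {
  'gpt-': 'openai',
  'claude-': 'anthropic',
  'gemini-': 'google',
  'mistral-': 'mistral',
  'grok-': 'xai',
};

export function providerForModel(modelId: string): string | undefined {
  // Find the first prefix that matches the model ID.
  const entry = Object.entries(PROVIDER_PREFIXES)
    .find(([prefix]) => modelId.startsWith(prefix));
  return entry?.[1];
}
```

For example, `providerForModel('gpt-4o')` resolves to `'openai'`, and an unrecognized ID yields `undefined` so the caller can fall back to a default.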

Installation

Using pnpm

```shell
# Clone the repository
git clone https://github.com/identity-wael/earna-ai.git
cd earna-ai

# Install dependencies for all workspaces
pnpm install

# Copy environment template
cp .env.example .env.local

# Add your Supabase and OpenAI keys to .env.local
# NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
# NEXT_PUBLIC_SUPABASE_ANON_KEY=your_anon_key
# OPENAI_API_KEY=your_openai_key

# Start development server (from monorepo root)
pnpm turbo dev --filter=console

# Or from the console directory
cd console && pnpm dev
```

Access the application at http://localhost:3000

First Steps

Test Multi-Model Chat

Try chatting with different AI models:

```typescript
// The console uses Vercel AI SDK v5
// Models are configured in lib/models/index.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';
import { xai } from '@ai-sdk/xai';

// Example: Stream a response from GPT-4o (primary model)
const result = await streamText({
  model: openai('gpt-4o'),
  messages: [
    { role: 'user', content: 'Explain how to use this chat platform' }
  ],
});
```

Enable Interactive Avatars

If you have a HeyGen API key, test the avatar feature:

  1. Click the Avatar button in the chat interface
  2. Select an avatar from the gallery
  3. Start a session and interact with voice + video
```typescript
// Avatar session management in app/api/heygen/create-session/route.ts
const response = await fetch('/api/heygen/create-session', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    avatarId: 'josh_lite3_20230714',
    voice: { voiceId: 'en-US-BrianNeural' }
  })
});

const { sessionId, sdp } = await response.json();
// Use WebRTC to establish the connection
```

Try Voice Mode

With an OpenAI API key, enable real-time voice conversations:

  1. Click the Voice button in chat
  2. Allow microphone access
  3. Speak naturally - the AI responds in real-time
```typescript
// Voice mode uses the GPT-4o Realtime API
// WebSocket connection in app/components/chat/voice-mode-realtime.tsx
const ws = new WebSocket('wss://api.openai.com/v1/realtime');

ws.onopen = () => {
  ws.send(JSON.stringify({
    type: 'session.update',
    session: {
      modalities: ['text', 'audio'],
      voice: 'alloy'
    }
  }));
};
```

Upload and Analyze Files

Test file upload capabilities:

```typescript
// File upload handling in app/components/chat-input/file-upload.tsx
const handleFileUpload = async (file: File) => {
  // Upload to Supabase Storage
  const { data, error } = await supabase.storage
    .from('chat-files')
    .upload(`${userId}/${file.name}`, file);
  if (error) throw error;

  // Resolve the public URL for the uploaded file
  // (upload returns only the path; getPublicUrl builds the URL)
  const { data: { publicUrl } } = supabase.storage
    .from('chat-files')
    .getPublicUrl(data.path);

  // Analyze with AI
  const response = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({
      message: 'Analyze this file',
      fileUrl: publicUrl
    })
  });
};
```

Customize Your Experience

Explore settings to personalize:

  • Model Visibility: Show/hide specific AI models
  • Theme: Switch between light/dark/system themes
  • System Prompt: Set a custom personality for the AI
  • API Keys: Add your own keys for more providers
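As a sketch of how these preferences could be merged with defaults when loaded from storage (the `ConsoleSettings` shape, field names, and defaults below are assumptions; the console's actual settings store may differ):

```typescript
// Hypothetical user-settings shape and defaults.
interface ConsoleSettings {
  theme: 'light' | 'dark' | 'system';
  systemPrompt: string;
  hiddenModels: string[];
}

const DEFAULT_SETTINGS: ConsoleSettings = {
  theme: 'system',
  systemPrompt: '',
  hiddenModels: [],
};

// Merge partially saved settings over the defaults, so keys
// added in later versions always resolve to a sensible value.
export function loadSettings(
  stored: Partial<ConsoleSettings>
): ConsoleSettings {
  return { ...DEFAULT_SETTINGS, ...stored };
}
```

Spreading the stored object over the defaults means a user who has only ever changed the theme still gets valid values for every other setting.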

Core API Routes

The console includes these main API endpoints:

Chat API Routes

```typescript
// app/api/chat/route.ts
// Main chat endpoint supporting all AI providers
export async function POST(req: Request) {
  const { messages, model, chatId } = await req.json();

  // Get the AI provider for the requested model
  const provider = getProviderFromModel(model);

  // Stream the response
  const result = await streamText({
    model: provider(model),
    messages,
    onFinish: async ({ text }) => {
      // Save to database
      await saveMessage(chatId, text);
    }
  });

  return result.toDataStreamResponse();
}
```
```typescript
// app/api/models/route.ts
// Get available models based on configured API keys
export async function GET() {
  const models = await getAvailableModels();
  return NextResponse.json(models);
}
```
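The models endpoint above presumably filters the registry by which provider keys are configured. A minimal sketch of that idea (the registry contents and env variable names are assumptions; `getAvailableModels` in the console may work differently):

```typescript
// Hypothetical model registry: each model declares the env var
// that must be set for its provider to be usable.
interface ModelEntry {
  id: string;
  requiredKey: string;
}

const MODEL_REGISTRY: ModelEntry[] = [
  { id: 'gpt-4o', requiredKey: 'OPENAI_API_KEY' },
  { id: 'claude-3-opus', requiredKey: 'ANTHROPIC_API_KEY' },
  { id: 'gemini-pro', requiredKey: 'GOOGLE_API_KEY' },
];

// Return only the models whose provider key is present in env.
export function availableModels(
  env: Record<string, string | undefined>
): string[] {
  return MODEL_REGISTRY
    .filter((m) => Boolean(env[m.requiredKey]))
    .map((m) => m.id);
}
```

With only `OPENAI_API_KEY` set, this returns just `['gpt-4o']`, which is why the model picker shrinks when provider keys are missing.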

Testing Your Setup

Verify everything is working:

```shell
# From monorepo root (recommended)
pnpm turbo build --filter=console
pnpm turbo typecheck --filter=console
pnpm turbo lint --filter=console

# Or from the console directory
cd console
pnpm build
pnpm type-check
pnpm lint
pnpm start
```

What’s Next?

Now that your console is running:

  1. Learn AI Integration: Deep dive into Vercel AI SDK v5
  2. Set Up Avatars: Configure Interactive Avatars
  3. Enable Voice: Implement Voice Conversations
  4. Secure Your App: Configure Supabase RLS
  5. Deploy: Follow the Production Guide

You’re Ready! Your Earna AI Console is running with GPT-4o as the primary model, plus multi-model support, interactive avatars, and voice capabilities!

Troubleshooting

Common issues and solutions:

  • Supabase Connection Error: Check your URL and anon key are correct
  • AI Model Not Available: Ensure you’ve added the API key for that provider
  • Avatar Not Loading: Verify HeyGen API key and check browser WebRTC support
  • Voice Mode Silent: OpenAI API key required, check microphone permissions
  • File Upload Fails: Ensure Supabase storage bucket is created with proper policies
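Several of the issues above come down to missing environment variables. A small sketch for checking them up front (the helper is hypothetical; the variable names match the keys used earlier in this guide):

```typescript
// Report which required environment variables are missing or empty.
export function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined>
): string[] {
  return required.filter(
    (name) => !env[name] || env[name]!.trim() === ''
  );
}

const REQUIRED = [
  'NEXT_PUBLIC_SUPABASE_URL',
  'NEXT_PUBLIC_SUPABASE_ANON_KEY',
  'OPENAI_API_KEY',
];

// Warn early instead of failing with an opaque error at request time.
const missing = missingEnvVars(REQUIRED, process.env as Record<string, string | undefined>);
if (missing.length > 0) {
  console.warn(`Missing environment variables: ${missing.join(', ')}`);
}
```

Running a check like this on startup surfaces a bad `.env.local` immediately, rather than as a Supabase connection error or a silently unavailable model later.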
