Troubleshooting & FAQ
This guide helps you diagnose and resolve common issues with Earna AI Console, including AI SDK v5 compatibility, message persistence, and TypeScript strict mode issues.
Quick Diagnostics
System Health Check
# Check GPT-4o API connectivity
curl https://api.openai.com/v1/models/gpt-4o \
  -H "Authorization: Bearer $OPENAI_API_KEY"
# Check Supabase database connection
curl https://YOUR_PROJECT.supabase.co/rest/v1/ \
  -H "apikey: $NEXT_PUBLIC_SUPABASE_ANON_KEY" \
  -H "Authorization: Bearer $NEXT_PUBLIC_SUPABASE_ANON_KEY"
# Check Vercel deployment status
curl https://api.vercel.com/v6/deployments/YOUR_DEPLOYMENT
Performance Monitoring
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { createClient } from '@supabase/supabase-js';

export async function healthCheck() {
  const supabase = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
  );

  // Check GPT-4o API (the AI SDK provider from '@ai-sdk/openai' is not the
  // raw OpenAI client, so use generateText rather than chat.completions)
  const startTime = Date.now();
  try {
    await generateText({
      model: openai('gpt-4o'),
      prompt: 'test',
      maxOutputTokens: 10
    });
    const latency = Date.now() - startTime;
    console.log(`GPT-4o Response Time: ${latency}ms`);
  } catch (error) {
    console.error('GPT-4o API Error:', error);
  }

  // Check Supabase connection with a lightweight count query
  try {
    const { error } = await supabase
      .from('users')
      .select('*', { count: 'exact', head: true });
    if (error) throw error;
    console.log('Supabase connection: OK');
  } catch (error) {
    console.error('Supabase Error:', error);
  }

  // Check alternative models
  const models = ['claude-3-opus', 'gemini-1.5-pro'];
  for (const model of models) {
    const available = await checkModelAvailability(model);
    console.log(`${model}: ${available ? 'Available' : 'Unavailable'}`);
  }
}
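The health check calls `checkModelAvailability`, which this guide never defines. A minimal sketch is below; the helper name comes from the snippet above, but the env-var mapping and the "configured key means available" semantics are assumptions, not part of the codebase. A live API probe could be layered on top.

```typescript
// Hypothetical implementation: "available" here simply means the
// provider's API key is configured in the environment.
const providerEnvVar: Record<string, string> = {
  'claude-3-opus': 'ANTHROPIC_API_KEY',
  'gemini-1.5-pro': 'GOOGLE_GENERATIVE_AI_API_KEY',
};

function checkModelAvailability(model: string): boolean {
  const envVar = providerEnvVar[model];
  return envVar !== undefined && !!process.env[envVar];
}
```

Because the function is synchronous, the `await checkModelAvailability(model)` call in `healthCheck` still works (awaiting a non-promise is a no-op).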
Common Issues
AI SDK v5 Issues
Message Persistence After Reload
Problem: Chat messages don’t persist after page reload
Symptoms:
- Messages appear in sidebar history
- Database contains messages
- Main chat window is empty after refresh
Solution: Synchronize MessagesProvider with useChat hook
// app/components/chat/use-chat-core.ts
export function useChatCore({ initialMessages, ... }) {
  const hasSyncedInitialMessagesRef = useRef(false)

  // Critical: Sync messages from database
  useEffect(() => {
    if (initialMessages.length > 0 &&
        !hasSyncedInitialMessagesRef.current &&
        status === "ready") {
      // Convert Message[] to UIMessage[]
      const uiMessages = initialMessages.map(msg => ({
        id: msg.id,
        role: msg.role,
        parts: [{ type: 'text', text: msg.content }]
      }))
      setMessages(uiMessages)
      hasSyncedInitialMessagesRef.current = true
    }
  }, [initialMessages, setMessages, status])
}
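The `Message[]` to `UIMessage[]` conversion is easier to unit-test when factored into a pure helper. The sketch below uses simplified local types that mirror the shapes in the snippet above rather than the exact AI SDK exports:

```typescript
// Sketch: converts database rows to the parts-based UI message shape
// used by AI SDK v5. Types are simplified for illustration.
interface DbMessage { id: string; role: 'user' | 'assistant' | 'system'; content: string }
interface UIMessageLike { id: string; role: string; parts: { type: 'text'; text: string }[] }

function toUIMessages(rows: DbMessage[]): UIMessageLike[] {
  return rows.map(msg => ({
    id: msg.id,
    role: msg.role,
    parts: [{ type: 'text' as const, text: msg.content }]
  }));
}
```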
Additional Checks:
// Verify messages are loaded from database
console.log('Messages from DB:', initialMessages.length)
console.log('Messages in UI:', messages.length)
Important: Always run pnpm type-check
after making changes to ensure TypeScript compatibility.
GPT-4o API Problems
GPT-4o API Errors
Problem: Authentication failed
{
  "error": {
    "message": "Incorrect API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
Solutions:
- Verify API Key

// Check environment variable
console.log('API Key exists:', !!process.env.OPENAI_API_KEY);
console.log('Key prefix:', process.env.OPENAI_API_KEY?.substring(0, 7));

// Ensure key is properly formatted
const apiKey = process.env.OPENAI_API_KEY?.trim();
- Update Environment Variables

# Local development
echo "OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx" >> .env.local

# Vercel production
vercel env add OPENAI_API_KEY production
- Check API Key Permissions

// Test API key with minimal request
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

try {
  const test = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Hi' }],
    max_tokens: 10
  });
  console.log('API key valid');
} catch (error) {
  console.error('API key invalid:', error);
}
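Before hitting the API at all, a quick offline format check can catch truncated or mis-pasted keys. The regex below is a heuristic (OpenAI does not document a fixed key format) and the helper name is an assumption; a match proves nothing about whether the key is actually live:

```typescript
// Heuristic sanity check: OpenAI keys start with "sk-". This does NOT
// prove the key is valid, only that it isn't obviously malformed.
function looksLikeOpenAIKey(key: string | undefined): boolean {
  if (!key) return false;
  return /^sk-[A-Za-z0-9_-]{20,}$/.test(key.trim());
}
```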
Problem: Model not available
{
  "error": {
    "message": "The model `gpt-4o` does not exist",
    "type": "invalid_request_error"
  }
}
Solutions:
- Check Model Access

// List available models
const models = await openai.models.list();
const hasGPT4o = models.data.some(m => m.id === 'gpt-4o');

if (!hasGPT4o) {
  console.log('GPT-4o not available, using fallback');
  // Fall back to gpt-4-turbo or gpt-3.5-turbo
}
- Use Vercel AI SDK with Fallback

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

async function getAIResponse(messages: any[]) {
  try {
    // Try GPT-4o first
    return await streamText({ model: openai('gpt-4o'), messages });
  } catch (error) {
    // Fallback to Claude
    return await streamText({ model: anthropic('claude-3-opus-20240229'), messages });
  }
}
Supabase Integration Issues
Database Connection Problems
Problem: Supabase connection timeout
Error: Connection to database timed out after 5000ms
Solutions:
- Configure Connection Pool

import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
  {
    auth: { persistSession: true, autoRefreshToken: true },
    db: { schema: 'public' },
    global: { headers: { 'x-connection-pool': 'true' } }
  }
);
- Implement Connection Retry

async function withSupabaseRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await operation();
    } catch (error: any) {
      if (i === maxRetries - 1) throw error;
      if (error.message?.includes('timeout')) {
        await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}
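The linear delay in the retry above (1 s, 2 s, 3 s) can be swapped for capped exponential backoff, the same policy the `reliableRequest` example later in this guide uses. Isolating it in a small pure helper (hypothetical, not part of the codebase) keeps the policy testable:

```typescript
// Capped exponential backoff: 1s, 2s, 4s, ... up to capMs.
// Plug the result into the retry loop's setTimeout call.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 10000): number {
  return Math.min(baseMs * Math.pow(2, attempt), capMs);
}
```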
Realtime Subscription Issues
Problem: Realtime updates not working
Error: Realtime subscription failed
Solutions:
- Setup Realtime Properly

// Enable realtime for the table in the Supabase dashboard first
const channel = supabase
  .channel('messages')
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'messages' },
    (payload) => {
      console.log('New message:', payload.new);
    }
  )
  .subscribe((status) => {
    if (status === 'SUBSCRIBED') {
      console.log('Realtime connected');
    }
  });

// In a React effect, clean up the subscription on unmount:
return () => {
  supabase.removeChannel(channel);
};
- Handle Reconnection

class RealtimeManager {
  private channels: Map<string, any> = new Map();

  subscribeWithReconnect(
    channelName: string,
    callback: (payload: any) => void
  ) {
    let retries = 0;
    const subscribe = () => {
      const channel = supabase
        .channel(channelName)
        .on('postgres_changes', { event: '*', schema: 'public' }, callback)
        .subscribe((status) => {
          if (status === 'SUBSCRIBED') {
            retries = 0;
          } else if (status === 'CHANNEL_ERROR') {
            // Retry with exponential backoff
            const delay = Math.min(1000 * Math.pow(2, retries), 30000);
            setTimeout(subscribe, delay);
            retries++;
          }
        });
      this.channels.set(channelName, channel);
    };
    subscribe();
  }
}
Storage Upload Issues
Problem: File upload failed
Error: new row violates row-level security policy
Solutions:
- Configure Storage Policies

-- In Supabase SQL Editor
CREATE POLICY "Users can upload files" ON storage.objects
FOR INSERT WITH CHECK (
  bucket_id = 'chat-attachments'
  AND auth.uid()::text = (storage.foldername(name))[1]
);

CREATE POLICY "Users can view own files" ON storage.objects
FOR SELECT USING (
  bucket_id = 'chat-attachments'
  AND auth.uid()::text = (storage.foldername(name))[1]
);
- Handle File Uploads

async function uploadFile(file: File, userId: string) {
  const fileName = `${userId}/${Date.now()}-${file.name}`;

  const { data, error } = await supabase.storage
    .from('chat-attachments')
    .upload(fileName, file, {
      cacheControl: '3600',
      upsert: false
    });

  if (error) {
    console.error('Upload error:', error);
    throw error;
  }

  // Get public URL
  const { data: { publicUrl } } = supabase.storage
    .from('chat-attachments')
    .getPublicUrl(fileName);

  return publicUrl;
}
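The storage policies above key on the first folder segment of the object path matching the user's id, so path construction is worth isolating. A sketch follows; the helper name, timestamp parameter, and sanitization rules are assumptions for illustration:

```typescript
// Builds "userId/timestamp-name" so the first path segment matches the
// RLS policy's (storage.foldername(name))[1] check. Sanitization here
// is illustrative: replace anything outside a conservative safe set.
function storagePath(userId: string, fileName: string, timestamp: number): string {
  const safeName = fileName.replace(/[^a-zA-Z0-9._-]/g, '_');
  return `${userId}/${timestamp}-${safeName}`;
}
```

Passing the timestamp in (instead of calling `Date.now()` inside) makes the function deterministic and trivial to test.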
Alternative Model Issues
Claude Integration Problems
Problem: Claude API not responding
{
  "error": {
    "type": "api_error",
    "message": "Claude API unavailable"
  }
}
Solution: Implement fallback to GPT-4o
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

async function getAIResponseWithFallback(messages: any[]) {
  const models = [
    { provider: anthropic('claude-3-opus-20240229'), name: 'Claude' },
    { provider: openai('gpt-4o'), name: 'GPT-4o' },
    { provider: google('gemini-1.5-pro'), name: 'Gemini' }
  ];

  for (const { provider, name } of models) {
    try {
      console.log(`Trying ${name}...`);
      return await streamText({
        model: provider,
        messages
      });
    } catch (error) {
      console.error(`${name} failed:`, error);
      continue;
    }
  }

  throw new Error('All AI models unavailable');
}
HeyGen Avatar Issues
Problem: Avatar session creation failed
Error: Failed to create HeyGen session
Solutions:
- Validate HeyGen Configuration

async function createAvatarSession() {
  try {
    const response = await fetch('https://api.heygen.com/v1/streaming.new', {
      method: 'POST',
      headers: {
        'X-Api-Key': process.env.HEYGEN_API_KEY!,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        avatar_id: 'josh_lite3_20230714',
        voice: { voice_id: 'en-US-BrianNeural' }
      })
    });

    if (!response.ok) {
      throw new Error(`HeyGen error: ${response.statusText}`);
    }

    return await response.json();
  } catch (error) {
    console.error('Avatar session error:', error);
    // Disable avatar feature gracefully
    return null;
  }
}
Performance Optimization
Slow Response Times
Problem: Chat responses taking too long
Warning: Response time exceeds 3 seconds
Solutions:
- Implement Response Caching

import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!
});

async function getCachedResponse(
  prompt: string,
  model: string
): Promise<any> {
  const cacheKey = `response:${model}:${prompt.slice(0, 100)}`;

  // Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log('Cache hit');
    return cached;
  }

  // Generate new response
  const response = await generateResponse(prompt, model);

  // Cache for 5 minutes
  await redis.setex(cacheKey, 300, response);

  return response;
}
- Optimize Database Queries

// Use select to limit fields
const messages = await supabase
  .from('messages')
  .select('id, content, role, created_at')
  .eq('chat_id', chatId)
  .order('created_at', { ascending: false })
  .limit(50);

// Use RPC for complex queries
const { data } = await supabase
  .rpc('get_chat_with_messages', {
    chat_id: chatId,
    message_limit: 50
  });
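One caveat on the caching example above: keying on `prompt.slice(0, 100)` makes every prompt that shares its first 100 characters collide. Hashing the full prompt avoids that. A sketch using Node's `crypto` module (the helper name is an assumption):

```typescript
import { createHash } from 'node:crypto';

// Hash the full prompt so long prompts with identical prefixes
// don't collide in the cache.
function cacheKey(model: string, prompt: string): string {
  const digest = createHash('sha256').update(prompt).digest('hex').slice(0, 16);
  return `response:${model}:${digest}`;
}
```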
Frequently Asked Questions
General Questions
Q: What AI models does Earna AI Console support?
A: Earna AI Console supports:
- Primary: GPT-4o (OpenAI) - Advanced reasoning, vision, and tools
- Alternatives: Claude 3 Opus, Gemini 1.5 Pro, Mistral Large
- Local: Ollama integration for privacy-focused deployments
- Specialized: GPT-4o Realtime for voice, HeyGen for avatars
Q: What are the daily message limits?
A: Message limits by subscription tier:
- Free: 10 messages/day
- Pro: 100 messages/day
- Enterprise: Unlimited messages
- Limits reset at midnight UTC
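The tier limits above translate into a trivial guard. A sketch follows; the type and function names are hypothetical, with `Infinity` standing in for "unlimited":

```typescript
type Tier = 'free' | 'pro' | 'enterprise';

// Daily message limits per tier (limits reset at midnight UTC).
function dailyLimit(tier: Tier): number {
  switch (tier) {
    case 'free': return 10;
    case 'pro': return 100;
    case 'enterprise': return Infinity;
  }
}

function canSend(tier: Tier, messagesUsedToday: number): boolean {
  return messagesUsedToday < dailyLimit(tier);
}
```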
Q: How do I reduce GPT-4o costs?
A: Cost optimization strategies:
// 1. Use streaming to show progress
const stream = await streamText({
  model: openai('gpt-4o'),
  messages,
  temperature: 0.7, // Lower temperature = more focused output
  maxOutputTokens: 1000 // Limit response length (renamed from maxTokens in AI SDK v5)
});

// 2. Cache common responses
const cachedResponse = await checkCache(prompt);
if (cachedResponse) return cachedResponse;

// 3. Use cheaper models for simple tasks
const model = isComplexQuery(prompt) ? 'gpt-4o' : 'gpt-3.5-turbo';

// 4. Implement usage tracking
await trackUsage(userId, tokens, cost);
Technical Questions
Q: How do I handle network interruptions?
A: Implement robust error handling:
// Automatic retry with exponential backoff
async function reliableRequest<T>(fn: () => Promise<T>): Promise<T> {
  const maxRetries = 3;
  let lastError;

  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      const delay = Math.min(1000 * Math.pow(2, i), 10000);
      await new Promise(r => setTimeout(r, delay));
    }
  }

  throw lastError;
}

// Usage (openai here is the official OpenAI SDK client)
const response = await reliableRequest(() =>
  openai.chat.completions.create({
    model: 'gpt-4o',
    messages
  })
);
Q: How do I optimize for mobile devices?
A: Mobile optimization techniques:
// 1. Reduce payload size
const mobileMessages = messages.slice(-10); // Limit history

// 2. Use progressive loading
const [initialData, setInitialData] = useState(null);
const [fullData, setFullData] = useState(null);

useEffect(() => {
  // Load essential data first
  loadInitialData().then(setInitialData);
  // Load the rest in the background
  loadFullData().then(setFullData);
}, []);

// 3. Implement offline support
if (!navigator.onLine) {
  return getCachedData();
}
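The `messages.slice(-10)` trick above generalizes to a small helper that always keeps the most recent turns (the name and default limit are assumptions):

```typescript
// Keep only the most recent `limit` messages; preserves order.
function trimHistory<T>(messages: T[], limit = 10): T[] {
  return limit >= messages.length ? messages : messages.slice(-limit);
}
```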
Q: How do I debug Supabase RLS policies?
A: Debug Row Level Security issues:
-- Check current user
SELECT auth.uid();

-- Simulate an authenticated user for testing
-- (auth.uid() reads the "sub" claim from request.jwt.claims)
SET LOCAL ROLE authenticated;
SELECT set_config('request.jwt.claims', '{"sub": "user-uuid-here"}', true);

-- Now test your queries
SELECT * FROM messages WHERE chat_id = 'test';

-- Check policy definitions
SELECT * FROM pg_policies WHERE tablename = 'messages';
Deployment Questions
Q: How do I deploy to Vercel?
A: Deploy with proper configuration:
# Install Vercel CLI
pnpm add -g vercel
# Set environment variables
vercel env add OPENAI_API_KEY production
vercel env add NEXT_PUBLIC_SUPABASE_URL production
vercel env add NEXT_PUBLIC_SUPABASE_ANON_KEY production
# Deploy
vercel --prod
Q: How do I monitor production errors?
A: Set up error monitoring:
# Install Sentry
pnpm add @sentry/nextjs

// Configure in next.config.js
const { withSentryConfig } = require('@sentry/nextjs');

module.exports = withSentryConfig(
  nextConfig,
  {
    silent: true,
    org: 'your-org',
    project: 'earna-console'
  }
);

// Capture errors
import * as Sentry from '@sentry/nextjs';

try {
  // Your code
} catch (error) {
  Sentry.captureException(error);
  throw error;
}
Getting Help
Support Channels
- Documentation: docs.earna.sh
- GitHub Issues: github.com/earna-ai/console/issues
- Discord Community: discord.gg/earna
- Email Support: support@earna.sh
Before Contacting Support
Please include:
- System Information

# Get system info
npx envinfo --system --npmPackages '{@ai-sdk/*,@supabase/*,next}' --binaries
- Error Logs

// Enable debug logging
localStorage.setItem('debug', 'ai:*,supabase:*');

// Capture console output
console.log('Error context:', {
  timestamp: new Date().toISOString(),
  error: error.message,
  stack: error.stack,
  model: currentModel,
  userId: user?.id
});
- Network Trace
  - Open browser DevTools
  - Go to the Network tab
  - Reproduce the issue
  - Export as a HAR file
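The error-context fields listed above can be packaged into a single serializable report object before sending it to support. A sketch with hypothetical names:

```typescript
// Hypothetical helper: collects the support-report fields listed above
// into one JSON-serializable object.
interface ErrorReport {
  timestamp: string;
  message: string;
  stack?: string;
  model: string;
  userId?: string;
}

function buildErrorReport(error: Error, model: string, userId?: string): ErrorReport {
  return {
    timestamp: new Date().toISOString(),
    message: error.message,
    stack: error.stack,
    model,
    userId
  };
}
```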
For real-time assistance, join our Discord community where you can get help from both the development team and experienced users.