
Multi-Model AI Support with Vercel AI SDK v5

Earna AI Console is built on Vercel AI SDK v5, a TypeScript toolkit for building AI applications. With GPT-4o as the primary model and support for 22+ AI providers, v5 brings significant improvements in streaming UI, structured data generation, and developer experience.

Why AI SDK v5?

AI SDK v5 represents a major leap forward in building AI-powered applications with enhanced type safety, streaming capabilities, and framework flexibility.

Key Advantages of v5

  • Unified API: Single consistent interface across 22+ AI providers
  • Streaming UI Components: Real-time UI updates with built-in React hooks
  • Structured Outputs: Type-safe object generation with Zod schemas
  • Enhanced Tool Calling: Multi-step tool execution with automatic retry
  • Framework Agnostic: Works with React, Next.js, Vue, Svelte, and Node.js
  • Improved DX: Better TypeScript support and error handling
  • Telemetry & Observability: Built-in monitoring and debugging
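The "unified API" advantage means a call written against one provider works unchanged against another. A minimal sketch of the pattern, using mock providers in place of the real `@ai-sdk/*` packages (all names here are illustrative, not the SDK's internals):

```typescript
// Illustrative sketch of the unified-API pattern with mock providers.
// Real code would import providers from @ai-sdk/* packages instead.
interface LanguageModel {
  modelId: string;
  generate(prompt: string): Promise<string>;
}

// Two mock "providers" exposing the same call shape
const mockOpenAI = (modelId: string): LanguageModel => ({
  modelId,
  generate: async (prompt) => `openai(${modelId}): ${prompt}`,
});
const mockAnthropic = (modelId: string): LanguageModel => ({
  modelId,
  generate: async (prompt) => `anthropic(${modelId}): ${prompt}`,
});

// One function works with any provider -- the point of the unified API
async function ask(model: LanguageModel, prompt: string): Promise<string> {
  return model.generate(prompt);
}

// Swapping providers requires changing only the model argument
const a = await ask(mockOpenAI('gpt-4o'), 'hello');
const b = await ask(mockAnthropic('claude-3-opus'), 'hello');
```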

Core v5 Features

The console implements a sophisticated multi-model architecture that leverages v5’s capabilities:

  • Text Generation: generateText and streamText for flexible content creation
  • Object Generation: generateObject and streamObject for structured data
  • Tool Calling: Advanced function calling with automatic execution
  • Streaming UI: Real-time updates with useChat, useCompletion, and useObject
  • Middleware System: Request/response interceptors for custom logic
  • Provider Abstraction: Seamless switching between AI models
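The practical difference between the generate* and stream* variants is consumption: one resolves once with the full result, the other yields partial chunks as they arrive. A framework-free sketch of the two shapes (mocked here; the real `generateText`/`streamText` live in the `ai` package):

```typescript
// Mocked contrast between generateText-style and streamText-style APIs.
async function generateTextMock(prompt: string): Promise<{ text: string }> {
  return { text: `Echo: ${prompt}` }; // one result, after full generation
}

async function* streamTextMock(prompt: string): AsyncGenerator<string> {
  for (const word of `Echo: ${prompt}`.split(' ')) {
    yield word + ' '; // chunks arrive as they are produced
  }
}

// Batch-style consumption: wait for the whole answer
const { text } = await generateTextMock('hi');

// Streaming consumption: render chunks as they arrive
let streamed = '';
for await (const chunk of streamTextMock('hi')) {
  streamed += chunk;
}
```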

Supported Providers

Earna AI Console integrates with 22+ AI providers through AI SDK v5, each offering unique capabilities:

Provider Capabilities Quick Reference

| Provider Type | Examples | Key Features |
| --- | --- | --- |
| Full-Featured | OpenAI, Anthropic, xAI, Azure | All features: vision, image generation, tools, streaming |
| High-Performance | Groq, Together AI, Fireworks | Ultra-fast inference, cost-effective |
| Image Specialists | Fal AI, Replicate, Stability AI | Advanced image generation models |
| Enterprise | Azure, Amazon Bedrock, Vertex AI | Security, compliance, private deployments |
| Search-Enhanced | Perplexity, Tavily | Built-in web search and real-time data |
| Local/Self-Hosted | Ollama, LM Studio, Baseten | Run models locally or custom deployments |

Complete Provider Guide: See our comprehensive AI SDK Provider Ecosystem guide for detailed capabilities, pricing, and implementation examples for all 22+ providers.

```typescript
// Full-featured providers
import { openai } from '@ai-sdk/openai';         // GPT-4o, DALL-E, Vision
import { anthropic } from '@ai-sdk/anthropic';   // Claude 3, 200K context
import { xai } from '@ai-sdk/xai';               // Grok 3, latest models

// High-performance providers
import { groq } from '@ai-sdk/groq';             // 800+ tokens/sec
import { fireworks } from '@ai-sdk/fireworks';   // Fast inference + image gen

// Specialized providers
import { google } from '@ai-sdk/google';         // Gemini, 1M+ context
import { mistral } from '@ai-sdk/mistral';       // European AI, multilingual
import { perplexity } from '@ai-sdk/perplexity'; // Search-enhanced responses
```

Installation

```bash
# Core Vercel AI SDK
pnpm add ai

# Primary providers (recommended)
pnpm add @ai-sdk/openai         # GPT-4o, DALL-E, Whisper
pnpm add @ai-sdk/anthropic      # Claude 3, 200K context
pnpm add @ai-sdk/google         # Gemini Pro, 1M+ context
pnpm add @ai-sdk/xai            # Grok 3, latest models

# High-performance providers
pnpm add @ai-sdk/groq           # 800+ tokens/sec
pnpm add @ai-sdk/fireworks      # Fast inference + image gen
pnpm add @ai-sdk/togetherai     # Cost-effective, fast

# Enterprise providers
pnpm add @ai-sdk/azure          # Enterprise OpenAI
pnpm add @ai-sdk/amazon-bedrock # AWS managed models
pnpm add @ai-sdk/google-vertex  # Google Cloud AI

# Specialized providers
pnpm add @ai-sdk/mistral        # European AI, multilingual
pnpm add @ai-sdk/cohere         # Enterprise NLP
pnpm add @ai-sdk/perplexity     # Search-enhanced
pnpm add @ai-sdk/fal            # Image generation
pnpm add @ai-sdk/replicate      # Custom models

# Local/Self-hosted
pnpm add @ai-sdk/ollama         # Run models locally
pnpm add @ai-sdk/lmstudio       # Local LLM studio

# Aggregators (access to 100+ models)
pnpm add @openrouter/ai-sdk-provider
```

Basic Configuration

Provider Setup

```typescript
// lib/models/index.ts
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';
import { mistral } from '@ai-sdk/mistral';
import { xai } from '@ai-sdk/xai';
import { perplexity } from '@ai-sdk/perplexity';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';

// Initialize providers
export const providers = {
  openai,
  anthropic,
  google,
  mistral,
  xai,
  perplexity,
  openrouter: createOpenRouter({ apiKey: process.env.OPENROUTER_API_KEY })
};

// Get the provider for a model ID of the form "provider:model"
export function getProvider(modelId: string) {
  const [provider] = modelId.split(':');
  return providers[provider as keyof typeof providers];
}
```

Streaming Responses

The console uses Vercel AI SDK’s streaming capabilities for real-time responses:

```typescript
// app/api/chat/route.ts
import { streamText } from 'ai';
import { getProvider } from '@/lib/models';

export async function POST(req: Request) {
  const { messages, model: modelId } = await req.json();

  // Get the appropriate provider
  const provider = getProvider(modelId);
  const model = provider(modelId);

  // Stream the response
  const result = await streamText({
    model,
    messages,
    temperature: 0.7,
    maxTokens: 4096,
    onFinish: async ({ text, usage }) => {
      // Save to database
      await saveMessage({
        content: text,
        model: modelId,
        tokens: usage.totalTokens
      });
    }
  });

  // Return streaming response
  return result.toDataStreamResponse();
}
```

New in v5: Enhanced Features

Structured Data Generation

AI SDK v5 introduces powerful structured output capabilities with full TypeScript support:

```typescript
// app/api/extract/route.ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define your schema with Zod
const RecipeSchema = z.object({
  name: z.string(),
  ingredients: z.array(z.object({
    name: z.string(),
    amount: z.string()
  })),
  steps: z.array(z.string()),
  cookTime: z.number()
});

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Generate structured data with type safety
  const { object } = await generateObject({
    model: openai('gpt-4o'),
    schema: RecipeSchema,
    prompt: `Extract recipe information from: ${prompt}`
  });

  // object is fully typed!
  return Response.json(object);
}
```

Streaming Structured Objects

Stream complex objects with real-time updates:

```typescript
// Stream structured data for progressive rendering
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { partialObjectStream } = await streamObject({
  model: openai('gpt-4o'),
  schema: z.object({
    analysis: z.string(),
    suggestions: z.array(z.string()),
    confidence: z.number()
  }),
  prompt: 'Analyze this business data...'
});

// Consume the stream with partial updates
for await (const partialObject of partialObjectStream) {
  // Update UI progressively as data arrives
  console.log(partialObject);
}
```

Advanced Tool Calling

V5 significantly improves tool calling with automatic execution and multi-step workflows:

```typescript
// Define tools with Zod schemas
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get weather information',
  parameters: z.object({
    location: z.string(),
    unit: z.enum(['celsius', 'fahrenheit'])
  }),
  execute: async ({ location, unit }) => {
    const weather = await fetchWeather(location, unit);
    return weather;
  }
});

const calculatorTool = tool({
  description: 'Perform calculations',
  parameters: z.object({
    expression: z.string()
  }),
  execute: async ({ expression }) => {
    // Simplified example -- never eval() model output in production
    return eval(expression);
  }
});

// Use tools in conversation
const result = await generateText({
  model: openai('gpt-4o'),
  messages,
  tools: {
    weather: weatherTool,
    calculator: calculatorTool
  },
  toolChoice: 'auto',   // Let the model decide
  maxToolRoundtrips: 5  // Allow multiple tool calls
});
```

Streaming UI Components

V5 provides powerful hooks for building responsive AI interfaces:

```tsx
// useObject hook for streaming structured data
// (exported as experimental_useObject in current releases)
import { experimental_useObject as useObject } from 'ai/react';
import { z } from 'zod';

function DataExtractor() {
  const { object, submit, isLoading, error } = useObject({
    api: '/api/extract',
    schema: z.object({
      summary: z.string(),
      keyPoints: z.array(z.string()),
      sentiment: z.enum(['positive', 'neutral', 'negative'])
    })
  });

  return (
    <div>
      {/* Real-time updates as the object streams in */}
      {object?.summary && <p>{object.summary}</p>}
      {object?.keyPoints?.map((point, i) => (
        <li key={i}>{point}</li>
      ))}
    </div>
  );
}
```

Middleware System

Add custom logic to all AI requests:

```typescript
// lib/ai/middleware.ts
import { experimental_wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';

const modelWithMiddleware = experimental_wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: {
    // Pre-process requests
    transformParams: async (params) => {
      return {
        ...params,
        messages: addSystemPrompt(params.messages)
      };
    },
    // Post-process responses
    wrapGenerate: async (generateFn) => {
      const result = await generateFn();
      await logUsage(result);
      return result;
    }
  }
});
```

Client-Side Integration

Using useChat Hook

```tsx
// components/chat/chat.tsx
import { useChat } from 'ai/react';

export function Chat() {
  const {
    messages,
    input,
    handleInputChange,
    handleSubmit,
    isLoading,
    error
  } = useChat({
    api: '/api/chat',
    body: { model: selectedModel },
    onError: (error) => {
      console.error('Chat error:', error);
    }
  });

  return (
    <div>
      {messages.map((message) => (
        <Message key={message.id} message={message} />
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Type a message..."
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
```

Advanced Features

Tool Calling

```typescript
// lib/tools/ai-tools.ts
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

export const aiTools = {
  searchWeb: tool({
    description: 'Search the web for information',
    parameters: z.object({
      query: z.string(),
      limit: z.number().optional()
    }),
    execute: async ({ query, limit = 5 }) => {
      // Implement web search logic
      return { results: [], source: 'web' };
    }
  }),
  analyzeData: tool({
    description: 'Analyze data and provide insights',
    parameters: z.object({
      data: z.array(z.any()),
      analysisType: z.enum(['summary', 'trend', 'comparison'])
    }),
    execute: async ({ data, analysisType }) => {
      // Implement data analysis logic
      return { analysis: 'Data analysis result' };
    }
  })
};

// Use in chat with GPT-4o (primary model)
const result = await streamText({
  model: openai('gpt-4o'),
  messages,
  tools: aiTools,
  toolChoice: 'auto'
});
```

Vision Capabilities

```typescript
// Handle image uploads
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function analyzeImage(imageUrl: string, prompt: string) {
  const result = await streamText({
    model: openai('gpt-4o'),
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: prompt },
          { type: 'image', image: imageUrl }
        ]
      }
    ]
  });

  return result;
}
```

Error Handling

```typescript
// lib/ai/error-handler.ts
export async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  delay = 1000
): Promise<T> {
  try {
    return await fn();
  } catch (error: any) {
    if (retries === 0) throw error;

    // Handle rate limits (Retry-After is in seconds; convert to ms)
    if (error.status === 429) {
      const retryAfter = error.headers?.['retry-after'];
      const waitMs = retryAfter ? Number(retryAfter) * 1000 : delay;
      await new Promise(resolve => setTimeout(resolve, waitMs));
      return withRetry(fn, retries - 1, delay * 2);
    }

    // Handle other errors
    throw error;
  }
}

// Usage
const response = await withRetry(() =>
  streamText({ model: openai('gpt-4o'), messages })
);
```

Model-Specific Features

Each provider has unique capabilities. The console automatically adjusts features based on the selected model.

OpenAI o1 Models

  • Reasoning tokens for complex problems
  • No streaming support (full response only)
  • Higher latency but better accuracy
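Because o1 models return only complete responses, the console must route them to generateText rather than streamText. A small capability check along these lines (matching on the `o1` model-ID prefix is an assumption; verify against the provider's model list):

```typescript
// Decide whether a model can be streamed. o1-family models return
// full responses only, so they must go through generateText.
// NOTE: the 'o1' prefix match is an illustrative assumption.
function supportsStreaming(modelId: string): boolean {
  return !/^o1(-|$)/.test(modelId);
}

// Route to the right API based on capability
function chooseApi(modelId: string): 'generateText' | 'streamText' {
  return supportsStreaming(modelId) ? 'streamText' : 'generateText';
}
```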

Anthropic Claude (Alternative)

  • Claude 3 Opus: 200K context window
  • Superior at following complex instructions
  • Alternative to GPT-4o for creative and analytical tasks

Google Gemini

  • 1M+ context window
  • Native multimodal understanding
  • Excellent for document analysis

Perplexity Sonar

  • Built-in web search
  • Real-time information
  • Citation support
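Since Sonar responses can carry citations, the console can render sources alongside the answer. A sketch of formatting those citations (the `{ text, citations }` response shape is an assumption, not Perplexity's exact payload; check the `@ai-sdk/perplexity` docs for the real fields):

```typescript
// Append numbered source links to a search-enhanced answer.
// The response shape below is illustrative, not the provider's API.
interface SearchResponse {
  text: string;
  citations: string[]; // source URLs
}

function withSources({ text, citations }: SearchResponse): string {
  if (citations.length === 0) return text;
  const sources = citations.map((url, i) => `[${i + 1}] ${url}`).join('\n');
  return `${text}\n\nSources:\n${sources}`;
}
```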

Performance Optimization

Caching Responses

```typescript
// lib/cache/ai-cache.ts
import { kv } from '@vercel/kv';

export async function getCachedResponse(key: string) {
  return await kv.get(key);
}

export async function setCachedResponse(
  key: string,
  response: any,
  ttl = 3600
) {
  // `ex` sets the expiry in seconds
  await kv.set(key, response, { ex: ttl });
}
```

Load Balancing

```typescript
// lib/ai/load-balancer.ts
export class ModelLoadBalancer {
  private modelPool = [
    'gpt-4o',                 // Primary model
    'claude-3-opus-20240229', // Alternative 1
    'gemini-1.5-pro'          // Alternative 2
  ];
  private currentIndex = 0;

  // Round-robin across the pool
  getNextModel(): string {
    const model = this.modelPool[this.currentIndex];
    this.currentIndex = (this.currentIndex + 1) % this.modelPool.length;
    return model;
  }
}
```

Monitoring & Analytics

```typescript
// lib/analytics/ai-metrics.ts
interface AIMetrics {
  model: string;
  tokens: number;
  latency: number;
  success: boolean;
}

export async function trackAIUsage({ model, tokens, latency, success }: AIMetrics) {
  await fetch('/api/analytics', {
    method: 'POST',
    body: JSON.stringify({
      event: 'ai_usage',
      properties: {
        model,
        tokens,
        latency,
        success,
        timestamp: new Date().toISOString()
      }
    })
  });
}
```

V5 Migration Benefits

Upgrading to AI SDK v5 provides immediate benefits in performance, type safety, and developer experience.

Performance Improvements

  • Faster streaming: Optimized data transfer with partial JSON streaming
  • Reduced latency: Better connection pooling and request batching
  • Smaller bundle size: Tree-shakeable exports and modular architecture

Developer Experience

  • Type Safety: Full TypeScript support with inferred types from Zod schemas
  • Better Errors: Detailed error messages with actionable fixes
  • Simplified API: Consistent patterns across all functions
  • Framework Support: First-class support for React Server Components

New Capabilities

  • Structured Outputs: Generate type-safe objects and arrays
  • Multi-modal Support: Handle text, images, and files seamlessly
  • Tool Chaining: Complex multi-step tool execution workflows
  • Observability: Built-in telemetry and debugging tools

Best Practices with v5

  1. Use structured outputs for data extraction tasks
  2. Leverage streaming UI hooks for responsive interfaces
  3. Implement middleware for cross-cutting concerns
  4. Use generateText for batch operations and streamText for interactive UIs
  5. Define Zod schemas for all structured data generation
  6. Enable telemetry in production for monitoring
  7. Use tool calling for complex workflows
  8. Implement proper error boundaries with v5’s error types
  9. Cache with unstable_cache for expensive operations
  10. Monitor usage with built-in metrics
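The caching idea in practice 9 generalizes beyond Next.js: memoize expensive generations by key with a bounded lifetime. In a Next.js app, `unstable_cache` from `next/cache` plays this role; the following is a framework-agnostic sketch of the same idea:

```typescript
// TTL-bounded memoization for expensive AI calls (sketch).
function cacheWithTTL<T>(
  fn: (key: string) => Promise<T>,
  ttlMs: number
): (key: string) => Promise<T> {
  const store = new Map<string, { value: T; expires: number }>();
  return async (key) => {
    const hit = store.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh hit
    const value = await fn(key); // expensive call (e.g. generateText)
    store.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}

// Example: count underlying calls to verify caching
let calls = 0;
const cachedAnswer = cacheWithTTL(async (prompt) => {
  calls++;
  return `answer:${prompt}`;
}, 60_000);

const first = await cachedAnswer('q');
const second = await cachedAnswer('q'); // served from cache
```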
