
AI SDK v5 Implementation

Earna AI Console is built on Vercel AI SDK v5, which provides robust TypeScript support, enhanced streaming capabilities, and improved message handling.

Version: We’re running AI SDK v5.0.23 with full TypeScript strict mode compatibility.

Core Architecture

Message Types

The console uses two primary message types from AI SDK v5:

UIMessage: used for chat interface interactions; imported from @ai-sdk/react
Message: used for data persistence; imported from @ai-sdk/ui-utils

UIMessage

// Shape of UIMessage from '@ai-sdk/react' (simplified)
interface UIMessage {
  id: string
  role: 'user' | 'assistant' | 'system'
  parts: UIMessagePart[]
}

// Example usage in components
const message: UIMessage = {
  id: 'msg-123',
  role: 'user',
  parts: [{ type: 'text', text: 'Hello, how can you help?' }]
}
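Because a UIMessage carries its content as typed parts rather than a single string, getting plain text back out means collecting the text parts. The extractTextContent helper used in the memoization section below is assumed to work roughly like this sketch (part shapes simplified):

function extractTextContent(message: UIMessage): string {
  // Keep only text parts; tool calls, files, and other part types are skipped
  return message.parts
    .flatMap((part) => (part.type === 'text' ? [part.text] : []))
    .join('')
}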

Message

// Shape of Message from '@ai-sdk/ui-utils' (simplified)
interface Message {
  id: string
  content: string
  role: 'user' | 'assistant' | 'system' | 'data'
  createdAt?: Date
  experimental_attachments?: Attachment[]
}

Provider Configuration

Our provider system supports multiple AI models through a unified interface:

// lib/openproviders/index.ts
import type { LanguageModel } from "ai"
import { openai } from "@ai-sdk/openai"
import { anthropic } from "@ai-sdk/anthropic"
import { google } from "@ai-sdk/google"

export function openproviders<T extends SupportedModel>(
  modelId: T,
  _settings?: OpenProvidersOptions<T>,
  apiKey?: string
): LanguageModel {
  // Provider selection based on model ID
  if (modelId.startsWith("gpt") || modelId.startsWith("o1")) {
    return openai(modelId as OpenAIModel)
  }
  if (modelId.startsWith("claude")) {
    return anthropic(modelId as AnthropicModel)
  }
  if (modelId.startsWith("gemini")) {
    return google(modelId as GoogleModel)
  }
  // ... other providers
}
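Call sites then resolve a provider purely from the model ID prefix; for example (the model IDs here are illustrative, not an endorsement of specific versions):

import { openproviders } from '@/lib/openproviders'

const gpt = openproviders('gpt-4o')                       // routed to @ai-sdk/openai
const claude = openproviders('claude-3-5-sonnet-latest')  // routed to @ai-sdk/anthropic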

Message Handling

Message Persistence

We implement a robust message persistence system that syncs between the UI and database:

// app/components/chat/use-chat-core.ts
import { useEffect, useRef } from 'react'

export function useChatCore({ initialMessages, ... }) {
  const hasSyncedInitialMessagesRef = useRef(false)

  // Sync messages from database on load
  useEffect(() => {
    if (
      initialMessages.length > 0 &&
      !hasSyncedInitialMessagesRef.current &&
      status === "ready"
    ) {
      const uiMessages = initialMessages.map(convertToUIMessage)
      setMessages(uiMessages)
      hasSyncedInitialMessagesRef.current = true
    }
  }, [initialMessages, messages.length, setMessages, status])

  // Handle sending messages; delegates to the sendMessage helper from useChat
  // (named differently so the wrapper doesn't shadow the helper it calls)
  const submitMessage = async (text: string) => {
    const response = await sendMessage({ text }, options)
    return response
  }
}
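The convertToUIMessage helper referenced above maps a persisted Message onto the parts-based UIMessage shape. It isn't shown in this section; a plausible sketch, assuming the two interfaces defined earlier:

function convertToUIMessage(message: Message): UIMessage {
  return {
    id: message.id,
    // Persisted 'data' messages have no direct UI role; surface them as assistant output
    role: message.role === 'data' ? 'assistant' : message.role,
    // Wrap the stored plain-text content in a single text part
    parts: [{ type: 'text', text: message.content }],
  }
}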

useChat Hook

The primary hook for chat functionality with full TypeScript support:

import { useChat } from '@ai-sdk/react'

const {
  messages,
  input,
  handleSubmit,
  status,
  stop,
  setMessages,
  setInput,
} = useChat({
  id: chatId,
  initialMessages: [],
  onFinish: async (message) => {
    await cacheAndAddMessage(message)
  },
  onError: (error) => {
    console.error('Chat error:', error)
    toast.error('Failed to send message')
  },
})

TypeScript Configuration

Strict Mode Settings

Our TypeScript configuration enforces strict type checking:

{ "compilerOptions": { "strict": true, "strictNullChecks": true, "strictFunctionTypes": true, "strictBindCallApply": true, "strictPropertyInitialization": true, "noImplicitThis": true, "alwaysStrict": true, "noImplicitAny": true } }

Type Assertions

Where necessary, we use type assertions for AI SDK compatibility:

// Required for some AI SDK v5 interfaces
const chatHelpers = useChat({
  initialMessages: [],
  onFinish: cacheAndAddMessage as any,
} as any)

// Accessing extended properties
const input = (chatHelpers as any).input || ""
const isLoading = (chatHelpers as any).isLoading

Multi-Model Chat

Our multi-chat implementation supports simultaneous conversations with multiple AI models:

// app/components/multi-chat/use-multi-chat.ts
export function useMultiChat(models: ModelConfig[]): ModelChat[] {
  const chatHooks = Array.from({ length: MAX_MODELS }, (_, index) =>
    useChat({
      onError: (error: any) => {
        const model = models[index]
        if (model) {
          toast.error(`Error with ${model.name}: ${error.message}`)
        }
      },
    } as any)
  )

  return models.map((model, index) => ({
    model,
    messages: chatHooks[index].messages,
    isLoading: (chatHooks[index] as any).isLoading,
    append: (message: any, options?: any) => {
      return (chatHooks[index] as any).append(message, options)
    },
    stop: chatHooks[index].stop,
  }))
}
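Note that the hooks are allocated from the fixed MAX_MODELS constant rather than models.length: React requires the same hooks in the same order on every render, so the hook count must stay stable even as the set of active models changes.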

File Attachments

Support for file uploads and attachments in conversations:

interface Attachment {
  name: string
  contentType: string
  url: string
}

// Usage in chat
const optimisticMessage = {
  id: optimisticId,
  content: input,
  role: "user" as const,
  createdAt: new Date(),
  experimental_attachments:
    files.length > 0 ? createOptimisticAttachments(files) : undefined,
}
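createOptimisticAttachments isn't defined in this section; a minimal sketch, assuming browser File objects and object URLs for instant local previews:

function createOptimisticAttachments(files: File[]): Attachment[] {
  return files.map((file) => ({
    name: file.name,
    contentType: file.type,
    // An object URL renders immediately, before the real upload finishes
    url: URL.createObjectURL(file),
  }))
}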

Streaming Responses

AI SDK v5 provides enhanced streaming capabilities:

// Real-time streaming status (from useChat)
status: 'submitted' | 'streaming' | 'ready' | 'error'

// Handle streaming in UI
{status === 'streaming' && <Loader />}
{status === 'ready' && <Message content={message.content} />}
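A small component sketch tying the status values to UI states (the component and prop names are assumptions, not the console's actual components):

function StreamControls({
  status,
  stop,
}: {
  status: 'submitted' | 'streaming' | 'ready' | 'error'
  stop: () => void
}) {
  // Offer a stop control only while tokens are arriving
  if (status === 'streaming') {
    return <button onClick={stop}>Stop generating</button>
  }
  if (status === 'error') {
    return <p role="alert">Something went wrong. Please retry.</p>
  }
  return null
}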

API Routes

Chat Endpoint

// app/api/chat/route.ts
import { streamText, convertToModelMessages } from 'ai'
import { openproviders } from '@/lib/openproviders'

export async function POST(req: Request) {
  const { messages, model, userId } = await req.json()

  const provider = openproviders(model)

  const result = streamText({
    model: provider,
    // UIMessages from the client must be converted to model messages in v5
    messages: convertToModelMessages(messages),
    system: systemPrompt,
  })

  // Stream UIMessage chunks back to the useChat client
  return result.toUIMessageStreamResponse()
}
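The route reads model and userId from the request body, so the client has to send them with each request. In AI SDK v5 this is typically configured on the hook's transport; a sketch, assuming chatId, selectedModel, and userId are in scope:

import { useChat } from '@ai-sdk/react'
import { DefaultChatTransport } from 'ai'

const chat = useChat({
  id: chatId,
  transport: new DefaultChatTransport({
    api: '/api/chat',
    // Merged into the JSON body of every request the hook makes
    body: { model: selectedModel, userId },
  }),
})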

Performance Optimizations

Memoization

We use memoization extensively to prevent unnecessary re-renders:

import { useMemo } from 'react'

const convertedMessages = useMemo(
  () =>
    messages.map(msg => ({
      ...msg,
      content: extractTextContent(msg),
      createdAt: msg.createdAt || new Date(),
    })),
  [messages]
)

const conversationProps = useMemo(
  () => ({
    messages: convertedMessages,
    status,
    onDelete: handleDelete,
    onEdit: handleEdit,
  }),
  [convertedMessages, status, handleDelete, handleEdit]
)

Lazy Loading

Components are loaded on-demand:

import dynamic from 'next/dynamic'

const FeedbackWidget = dynamic(
  () => import('./feedback-widget').then(mod => mod.FeedbackWidget),
  { ssr: false }
)

Error Handling

Comprehensive error handling throughout the application:

try {
  const response = await sendMessage({ text: input }, options)
  // Handle success
} catch (error) {
  console.error('Failed to send message:', error)
  toast({
    title: 'Failed to send message',
    description: 'Please try again.',
    status: 'error',
  })
  // Clean up optimistic updates
  cleanupOptimisticAttachments(attachments)
}
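The cleanupOptimisticAttachments call is likewise not defined in this section; assuming the object-URL approach sketched under File Attachments, cleanup amounts to revoking those URLs so they don't leak memory:

function cleanupOptimisticAttachments(attachments?: Attachment[]) {
  for (const attachment of attachments ?? []) {
    // Only locally created object URLs need revoking; remote URLs are left alone
    if (attachment.url.startsWith('blob:')) {
      URL.revokeObjectURL(attachment.url)
    }
  }
}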

Best Practices

Type Safety: always use proper TypeScript types, and avoid any unless it is necessary for AI SDK compatibility.
Message Sync: ensure messages are properly synced between the UI and the database.
Error States: handle all error states gracefully, with user feedback.
Performance: memoize computed values and use lazy loading where appropriate.
