# AI SDK v5: Next Generation AI Development

Vercel AI SDK v5 is a major evolution in building AI-powered applications. This guide covers the core features and improvements that make v5 a comprehensive toolkit for AI development.

## Core Functions

### Text Generation

`generateText` performs non-streaming text generation and resolves once the full response is available:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text, usage, finishReason } = await generateText({
  model: openai('gpt-4o'),
  messages: [
    { role: 'system', content: 'You are a helpful assistant' },
    { role: 'user', content: 'Explain quantum computing' }
  ],
  temperature: 0.7,
  maxOutputTokens: 1000 // v5 renames maxTokens to maxOutputTokens
});

console.log(text);  // Complete response
console.log(usage); // Token usage metrics
```
### Structured Output Generation

V5's structured outputs give you type-safe data extraction with automatic validation: `generateObject` parses the model's output against a Zod schema before returning it.
```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const ProductSchema = z.object({
  name: z.string().describe('Product name'),
  price: z.number().describe('Price in USD'),
  features: z.array(z.string()).describe('Key features'),
  inStock: z.boolean(),
  categories: z.array(z.enum(['electronics', 'clothing', 'food']))
});

const { object } = await generateObject({
  model: openai('gpt-4o'),
  schema: ProductSchema,
  prompt: 'Extract product info from this description...'
});

// object is fully typed as z.infer<typeof ProductSchema>
console.log(object.name); // TypeScript knows this is a string
```
### Advanced Tool Calling

V5's tool calling system enables complex multi-step workflows with automatic tool execution:
```ts
import { generateText, tool, stepCountIs, generateId } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Define reusable tools. db, sendEmail, and forecastNextValue are
// application helpers, not part of the SDK.
const databaseTool = tool({
  description: 'Query the database',
  inputSchema: z.object({ // v5 renames `parameters` to `inputSchema`
    query: z.string(),
    table: z.enum(['users', 'products', 'orders'])
  }),
  execute: async ({ query, table }) => {
    return await db.query(query, { table });
  }
});

const emailTool = tool({
  description: 'Send an email',
  inputSchema: z.object({
    to: z.string().email(),
    subject: z.string(),
    body: z.string()
  }),
  execute: async ({ to, subject, body }) => {
    await sendEmail({ to, subject, body });
    return { success: true, messageId: generateId() };
  }
});

const calculationTool = tool({
  description: 'Perform complex calculations',
  inputSchema: z.object({
    operation: z.enum(['sum', 'average', 'forecast']),
    data: z.array(z.number())
  }),
  execute: async ({ operation, data }) => {
    switch (operation) {
      case 'sum': return data.reduce((a, b) => a + b, 0);
      case 'average': return data.reduce((a, b) => a + b, 0) / data.length;
      case 'forecast': return forecastNextValue(data);
    }
  }
});

// Use the tools in a multi-step workflow
const result = await generateText({
  model: openai('gpt-4o'),
  messages: [
    {
      role: 'user',
      content: 'Find users who made purchases last month, calculate their average spending, and email them a summary'
    }
  ],
  tools: {
    database: databaseTool,
    email: emailTool,
    calculate: calculationTool
  },
  toolChoice: 'auto',
  stopWhen: stepCountIs(5) // v5 replaces maxToolRoundtrips with a stop condition
});
```
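The multi-step behavior above boils down to a loop: the model either answers or requests a tool, tool results are appended to the conversation, and generation stops once a step budget is reached. A simplified simulation with a scripted mock model (no SDK calls; every name here is illustrative):

```typescript
type Step =
  | { type: 'tool-call'; tool: string; input: unknown }
  | { type: 'text'; text: string };

// Scripted stand-in for the model: first asks for a tool, then answers.
function mockModel(history: unknown[]): Step {
  return history.length === 0
    ? { type: 'tool-call', tool: 'calculate', input: { data: [10, 20, 30] } }
    : { type: 'text', text: 'The average spending is 20.' };
}

const tools: Record<string, (input: any) => unknown> = {
  calculate: ({ data }: { data: number[] }) =>
    data.reduce((a, b) => a + b, 0) / data.length,
};

function runWorkflow(maxSteps: number): string {
  const history: unknown[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const next = mockModel(history);
    if (next.type === 'text') return next.text; // model is done
    // Execute the requested tool and feed the result back to the model
    const result = tools[next.tool](next.input);
    history.push({ tool: next.tool, result });
  }
  throw new Error('Step budget exhausted'); // the stop condition fired
}

console.log(runWorkflow(5)); // "The average spending is 20."
```

The real SDK adds schema validation of tool inputs and streaming, but the control flow is this same generate-execute-append loop.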
## UI Framework Integration

### React Hooks: useChat

`useChat` manages a full chat interface, from message state to streaming updates:
```tsx
import { useChat } from '@ai-sdk/react'; // v5 moves the hook out of 'ai/react'
import { DefaultChatTransport } from 'ai';
import { useState } from 'react';

export function ChatInterface() {
  // v5 no longer manages input state for you
  const [input, setInput] = useState('');

  const {
    messages,
    status, // 'submitted' | 'streaming' | 'ready' | 'error'
    error,
    sendMessage,
    regenerate,
    stop,
    setMessages
  } = useChat({
    transport: new DefaultChatTransport({ api: '/api/chat' }),
    onFinish: ({ message }) => {
      // Message complete; trackEvent is an application helper
      trackEvent('message_sent', { parts: message.parts.length });
    },
    onError: error => {
      toast.error('Failed to send message'); // toast is an application helper
    }
  });

  const isLoading = status === 'submitted' || status === 'streaming';

  return (
    <div>
      {messages.map(m => (
        <div key={m.id} className={m.role}>
          {/* v5 messages are typed parts instead of a content string */}
          {m.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={e => setInput(e.target.value)}
          placeholder="Type a message..."
          disabled={isLoading}
        />
        <button type="submit">Send</button>
        {isLoading && <button type="button" onClick={stop}>Stop</button>}
      </form>
    </div>
  );
}
```
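As tokens stream in, the hook keeps appending deltas to the in-progress assistant message so the UI re-renders incrementally. A minimal reducer capturing that pattern (illustrative, not the hook's actual internals):

```typescript
type Message = { id: string; role: 'user' | 'assistant'; content: string };

// Append a streamed text delta to the assistant message it belongs to,
// returning a new array so React change detection sees a fresh reference.
function applyDelta(messages: Message[], id: string, delta: string): Message[] {
  const existing = messages.find(m => m.id === id);
  if (!existing) {
    return [...messages, { id, role: 'assistant', content: delta }];
  }
  return messages.map(m =>
    m.id === id ? { ...m, content: m.content + delta } : m
  );
}

let messages: Message[] = [{ id: 'u1', role: 'user', content: 'Hi' }];
for (const delta of ['Hel', 'lo', ' there!']) {
  messages = applyDelta(messages, 'a1', delta);
}
console.log(messages[1].content); // "Hello there!"
```

Because each delta produces a new array and message object, this plugs directly into `setMessages`-style React state updates.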
## Middleware & Interceptors

Language model middleware lets you customize request and response handling for any model:
```ts
import { wrapLanguageModel } from 'ai'; // stable in v5, no experimental_ prefix
import { openai } from '@ai-sdk/openai';

const enhancedModel = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: {
    // Pre-process all requests. Middleware operates on the provider-level
    // call options, where the message array is `prompt`.
    transformParams: async ({ params }) => {
      return {
        ...params,
        prompt: [
          {
            role: 'system',
            content: 'You are a helpful assistant. Always be concise.'
          },
          ...params.prompt
        ],
        temperature: Math.min(params.temperature ?? 0.7, 0.9)
      };
    },
    // Wrap generation with custom logic. logGeneration and logError are
    // application helpers. v5 middleware has no separate transformResponse
    // hook; post-process the result here (or use wrapStream for streaming).
    wrapGenerate: async ({ doGenerate }) => {
      const startTime = Date.now();
      try {
        const result = await doGenerate();
        // Log successful generation
        await logGeneration({
          model: 'gpt-4o',
          duration: Date.now() - startTime,
          tokens: result.usage?.totalTokens,
          success: true
        });
        return result;
      } catch (error) {
        // Log errors
        await logError({
          model: 'gpt-4o',
          error: error.message,
          duration: Date.now() - startTime
        });
        throw error;
      }
    }
  }
});
```
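The `wrapGenerate` hook is ordinary function composition: each middleware receives the inner generate function and returns a wrapped one. Stripped of SDK types, the pattern looks like this (all names illustrative):

```typescript
type Generate = (prompt: string) => Promise<string>;
type Middleware = (inner: Generate) => Generate;

// Compose middlewares so the first one listed becomes the outermost wrapper.
function applyMiddleware(base: Generate, ...middleware: Middleware[]): Generate {
  return middleware.reduceRight((inner, mw) => mw(inner), base);
}

const logCalls: string[] = [];

// Records before/after entries around the inner call
const logging: Middleware = inner => async prompt => {
  logCalls.push(`start:${prompt}`);
  const result = await inner(prompt);
  logCalls.push(`done:${prompt}`);
  return result;
};

// Transforms the inner call's output
const shout: Middleware = inner => async prompt =>
  (await inner(prompt)).toUpperCase();

// A stub model standing in for the real provider call
const baseModel: Generate = async prompt => `echo ${prompt}`;

const model = applyMiddleware(baseModel, logging, shout);
```

Calling `model('hi')` resolves to `"ECHO HI"`, with `start:hi`/`done:hi` pushed to `logCalls` around the inner work, which is exactly the shape of the timing middleware above.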
## Provider Ecosystem

- **OpenAI**: GPT-4o, GPT-4, GPT-3.5, DALL-E, Whisper
- **Anthropic**: Claude 3 Opus, Claude 3.5 Sonnet, Claude 3 Haiku
- **Google**: Gemini 1.5 Pro, Gemini 1.5 Flash, PaLM 2
- **Meta**: Llama 3, Llama 2, Code Llama
- **Mistral**: Mistral Large, Mixtral 8x7B, Mistral 7B
- **Cohere**: Command R+, Command R, Embed v3
- **Perplexity**: Sonar Large, Sonar Small, Online Models
- **20+ more**: xAI, Together, Replicate, Hugging Face, and others
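What makes this ecosystem work is that every provider package exposes its models behind the same interface, so application code stays provider-agnostic. A toy version of the idea (the real contract is the SDK's language model specification; these factories and names are illustrative):

```typescript
// Minimal provider-agnostic interface, in the spirit of the SDK's model spec.
interface ChatModel {
  provider: string;
  modelId: string;
  generate(prompt: string): Promise<string>;
}

// Each "provider package" is just a factory returning that interface.
function fakeOpenAI(modelId: string): ChatModel {
  return {
    provider: 'openai',
    modelId,
    generate: async p => `[openai/${modelId}] ${p}`,
  };
}

function fakeAnthropic(modelId: string): ChatModel {
  return {
    provider: 'anthropic',
    modelId,
    generate: async p => `[anthropic/${modelId}] ${p}`,
  };
}

// Application code depends only on ChatModel, so swapping providers is
// a one-line change at the call site.
async function summarize(model: ChatModel, text: string): Promise<string> {
  return model.generate(`Summarize: ${text}`);
}
```

With this shape, `summarize(fakeOpenAI('gpt-4o'), doc)` and `summarize(fakeAnthropic('claude'), doc)` are interchangeable, which is the same property `openai('gpt-4o')` and its siblings give you in the real SDK.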
## Performance Optimizations

### Response Caching

```ts
import { unstable_cache } from 'next/cache';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const cachedGeneration = unstable_cache(
  async (prompt: string) => {
    const { text } = await generateText({
      model: openai('gpt-4o'),
      prompt
    });
    return text;
  },
  ['ai-generation'],
  {
    revalidate: 3600, // Cache for 1 hour
    tags: ['ai-cache']
  }
);
```
### Parallel Generation

```ts
// Generate multiple responses in parallel
const [summary, keywords, sentiment] = await Promise.all([
  generateText({
    model: openai('gpt-4o-mini'),
    prompt: `Summarize: ${text}`
  }),
  generateObject({
    model: openai('gpt-4o-mini'),
    output: 'array', // top-level schemas must be objects, so use array output mode
    schema: z.string(),
    prompt: `Extract keywords: ${text}`
  }),
  generateObject({
    model: openai('gpt-4o-mini'),
    output: 'enum',
    enum: ['positive', 'negative', 'neutral'],
    prompt: `Analyze sentiment: ${text}`
  })
]);

// Each entry is a full result object: summary.text, keywords.object, sentiment.object
```
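Note that `Promise.all` fails fast: one rejected generation rejects the whole batch. When partial results are acceptable, `Promise.allSettled` lets each task succeed or fail independently; a small helper sketching that trade-off (illustrative names):

```typescript
// Run independent generations and substitute a fallback for any that fail,
// instead of letting one rejection sink the whole Promise.all batch.
async function settleWithFallback<T>(
  tasks: Promise<T>[],
  fallback: T
): Promise<T[]> {
  const settled = await Promise.allSettled(tasks);
  return settled.map(s => (s.status === 'fulfilled' ? s.value : fallback));
}

const results = settleWithFallback(
  [
    Promise.resolve('summary text'),
    Promise.reject(new Error('rate limited')), // e.g. one model call failing
    Promise.resolve('positive'),
  ],
  'unavailable'
);

results.then(r => console.log(r)); // ["summary text", "unavailable", "positive"]
```

This suits dashboards where a missing sentiment label is better than no page at all; keep `Promise.all` when every result is required.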
### Edge Runtime Support

```ts
// app/api/chat/route.ts
import { streamText, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';

export const runtime = 'edge'; // Enable the edge runtime

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Runs at the edge: lower latency, globally distributed
  const result = streamText({ // streamText returns immediately; no await needed
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages) // UI messages -> model messages
  });

  return result.toUIMessageStreamResponse(); // v5 replaces toDataStreamResponse
}
```
## Observability & Debugging

### Built-in Telemetry

AI SDK telemetry is built on OpenTelemetry rather than a standalone logging API. Enable instrumentation per call and the SDK records spans for each generation, including the model used, token usage, and timing, through whatever OpenTelemetry exporter your application has configured.

### Custom Telemetry Metadata

Attach your own identifiers to the recorded spans with the `experimental_telemetry` option:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o'),
  messages,
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'chat-generation',
    metadata: {
      // session comes from your application's auth layer
      userId: session.userId,
      sessionId: session.id
    }
  }
});
```
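Conceptually, each instrumented call becomes a span: a named operation with a duration and attached attributes. A tiny tracer sketch showing the shape of what gets recorded (illustrative, not the SDK's implementation or the OpenTelemetry API):

```typescript
type Span = {
  name: string;
  durationMs: number;
  attributes: Record<string, string | number | boolean>;
};

const spans: Span[] = [];

// Wrap an async operation and record a span for it, success or failure.
async function withSpan<T>(
  name: string,
  attributes: Span['attributes'],
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    spans.push({ name, durationMs: Date.now() - start, attributes });
  }
}

const answer = withSpan(
  'chat-generation', // corresponds to functionId
  { userId: 'user123', model: 'gpt-4o' }, // corresponds to metadata
  async () => 'generated text' // stand-in for the model call
);
```

An OpenTelemetry exporter then ships these spans to your backend of choice, where the attributes make per-user and per-feature cost and latency queries possible.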
## Migration from v4

**Breaking changes:** v5 makes significant API changes. Plan your migration carefully.

### Key Changes

| Feature | v4 | v5 |
| --- | --- | --- |
| Import | `import { OpenAIStream } from 'ai'` | `import { streamText } from 'ai'` |
| Streaming | `OpenAIStream(response)` | `streamText({ model, messages })` |
| Providers | Custom implementations | Unified `@ai-sdk/*` provider packages |
| Hooks | `useChat({ api })` | `useChat` with transports and typed message parts |
| Types | Basic | Full TypeScript with generics |
| Tools | Function calling | Tool system with Zod schemas |
| Objects | Manual parsing | `generateObject` with schemas |
### Migration Example

```ts
// v4 (old): provider-specific stream helpers
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    stream: true,
    messages
  });
  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
```

```ts
// v5 (new): unified streamText with provider packages
import { streamText, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai('gpt-4o'),
    messages: convertToModelMessages(messages)
  });
  return result.toUIMessageStreamResponse();
}
```
## Resources

Ready to build? AI SDK v5 provides everything you need to create sophisticated AI applications with confidence.