vercel · ai-sdk · nextjs · llm

Vercel AI SDK: A Practical Guide for Full-Stack Developers

A practical guide to Vercel AI SDK — setup, core concepts, common mistakes, and production tips for full-stack developers.

Suhail Roushan

May 3, 2026 · 6 min read

The Vercel AI SDK is a TypeScript toolkit that lets you build full-stack AI applications with streaming, tool calling, and unified provider APIs in under 100 lines of code.

If you're building an AI feature into a web app, you've likely faced the tedious work of wiring up chat completions, managing streaming responses, and handling different provider APIs. The Vercel AI SDK abstracts this complexity into a clean, framework-agnostic library. I've integrated it into several production projects at Anjeer Labs, and it consistently cuts development time for AI features in half. This guide will walk through its core concepts, show you real code, and explain when it's the right tool for the job.

Why Vercel AI SDK Matters (and When to Skip It)

The SDK's primary value is unification. Instead of writing bespoke logic for OpenAI, Anthropic, or Google Gemini, you use one consistent interface. It also bakes in first-class support for React and Next.js, with hooks like useChat that handle streaming UI updates automatically. This is a massive productivity win.

However, you should skip the Vercel AI SDK if your project is a simple, one-off script calling a single LLM API. The abstraction adds overhead you don't need. It's also not ideal if you require extremely low-level control over HTTP connections or are building a non-JavaScript/TypeScript backend. For full-stack web apps, though, it's often the fastest path to a polished AI feature.

Getting Started with Vercel AI SDK

Let's set up a basic AI chat endpoint in a Next.js App Router project. First, install the core package and the provider you need (the tool and structured-output examples later also use zod).

npm install ai openai

Now, create a route handler at app/api/chat/route.ts. This minimal example calls the OpenAI client directly and pipes the result through the SDK's streaming helpers (OpenAIStream and StreamingTextResponse, which ship with AI SDK 3.x); the one-line provider swap the SDK is known for comes from the unified interface covered below.

import { OpenAI } from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Run on the Edge runtime (optional)
export const runtime = 'edge';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await openai.chat.completions.create({
    model: 'gpt-4-turbo',
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response);
  return new StreamingTextResponse(stream);
}
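
If you're on AI SDK 4 or later, those helpers are gone; the same route can be written with the unified streamText API. A sketch, assuming the @ai-sdk/openai provider package and the toDataStreamResponse helper from newer SDK versions:

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // streamText handles the provider call and the streaming plumbing.
  const result = streamText({
    model: openai('gpt-4-turbo'),
    messages,
  });

  // Wrap the stream in a Response that useChat can consume.
  return result.toDataStreamResponse();
}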

On the client, you can consume this stream with the SDK's React hook. (The ai/react import path is the 3.x location; AI SDK 4 moves the hooks to @ai-sdk/react.)

'use client';
import { useChat } from 'ai/react';

export function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.role}: {m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}

With this, you have a fully functional, streaming chat interface. The useChat hook manages the message history, the ongoing request, and the streamed response updates.

Core Vercel AI SDK Concepts Every Developer Should Know

1. Unified Provider Abstraction

The ai package provides generateText and streamText functions that work across providers. You define your model once, using a provider package such as @ai-sdk/openai (npm install @ai-sdk/openai).

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function handleRequest() {
  const result = await streamText({
    model: openai('gpt-4-turbo'),
    prompt: 'Explain quantum computing in one sentence.',
  });

  // textStream is an async-iterable stream of text chunks
  for await (const chunk of result.textStream) {
    console.log(chunk);
  }
}
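
The unification claim is easy to verify: switching providers changes only the import and the model name. A sketch with Anthropic, assuming the @ai-sdk/anthropic package is installed and ANTHROPIC_API_KEY is set:

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await streamText({
  // Only this line differs from the OpenAI version above.
  model: anthropic('claude-3-5-sonnet-20240620'),
  prompt: 'Explain quantum computing in one sentence.',
});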

2. First-Class Tool Calling

Defining and using tools (formerly "functions") is streamlined. You define a schema and a handler, and the SDK manages the execution loop.

import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = await streamText({
  model: openai('gpt-4-turbo'),
  tools: {
    getWeather: tool({
      description: 'Get the weather for a city',
      // A Zod schema describes the parameters the model must supply
      parameters: z.object({
        city: z.string().describe('The city to look up'),
      }),
      execute: async ({ city }) => {
        // Your API call here
        return `The weather in ${city} is 72°F and sunny.`;
      },
    }),
  },
  prompt: "What's the weather in Hyderabad?",
});

3. Structured Output

Getting JSON from an LLM is notoriously tricky. The SDK's generateObject function makes it reliable.

import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4-turbo'),
  schema: z.object({
    summary: z.string(),
    sentiment: z.enum(['positive', 'neutral', 'negative']),
    keywords: z.array(z.string()),
  }),
  prompt: 'Analyze this user feedback: "The new dashboard is fast and intuitive, but I miss the old export feature."',
});

console.log(object.sentiment); // e.g. "positive"

Common Vercel AI SDK Mistakes and How to Fix Them

1. Not Handling Stream Termination Properly

When a user navigates away from a page while a stream is active, you must abort the request. The hooks provide a stop function for this.

const { stop, isLoading } = useChat();

// Call stop() on component unmount or via a cancel button
<button onClick={stop} disabled={!isLoading}>Stop Generation</button>

2. Forgetting to Pass the stream: true Flag

If you use the raw provider client (like openai.chat.completions.create) instead of the SDK's streamText, you must explicitly set stream: true. Otherwise, you'll get a buffered response, defeating the purpose of the streaming utilities.
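
For example, with the raw client from the route handler above:

const response = await openai.chat.completions.create({
  model: 'gpt-4-turbo',
  stream: true, // omit this and you get one buffered completion object
  messages,
});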

3. Misunderstanding the messages Array Format

The SDK expects and returns messages in a specific shape: { role: 'user' | 'assistant' | 'system', content: string }. A common error is trying to pass a plain string or an object with extra fields. Always structure your input to match this.
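
Concretely, a valid messages array looks like this:

const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Summarize this article for me.' },
];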

When Should You Use Vercel AI SDK?

Use the Vercel AI SDK when you are building a web application that requires real-time, streaming AI interactions (like a chat interface) and you want to maintain the flexibility to switch between LLM providers. It's also an excellent choice if you're using Next.js or React and want to leverage built-in hooks for state management. Avoid it for simple backend batch processing or if your team is committed to a different backend stack like Python/Django without a JavaScript frontend.

Vercel AI SDK in Production

For production use on suhailroushan.com and other projects, I follow two key practices. First, always implement robust error handling and fallbacks. The SDK's helper functions can throw, so wrap them in try-catch blocks and consider a fallback model. Second, use environment-based provider configuration. Don't hardcode a model name; use environment variables to switch between, for example, gpt-4-turbo in production and a cheaper model like gpt-3.5-turbo in development.
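
Here's a minimal sketch of both practices together; the AI_MODEL variable name and the generateWithFallback helper are my own conventions, not part of the SDK:

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Model comes from the environment; default to the cheaper option.
const primary = openai(process.env.AI_MODEL ?? 'gpt-3.5-turbo');
const fallback = openai('gpt-3.5-turbo');

async function generateWithFallback(prompt: string) {
  try {
    return await generateText({ model: primary, prompt });
  } catch (err) {
    // Log the failure and retry once on the fallback model.
    console.error('Primary model failed:', err);
    return await generateText({ model: fallback, prompt });
  }
}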

Instrument everything. Log token usage, response times, and errors. While the SDK doesn't provide telemetry out of the box, you can intercept streams or use provider-specific callbacks to collect this data, which is crucial for cost management and performance monitoring.
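
One place to hook in is streamText's onFinish callback, which is invoked with token usage once the stream completes. A sketch; the usage field names below match the AI SDK 3/4 shape:

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Explain quantum computing in one sentence.',
  // Fires after the full response has streamed to the client.
  onFinish({ usage, finishReason }) {
    console.log('prompt tokens:', usage.promptTokens);
    console.log('completion tokens:', usage.completionTokens);
    console.log('finish reason:', finishReason);
  },
});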

Start your next AI feature by defining your tool schemas with Zod—it will force you to think clearly about the data structure before you write a line of LLM interaction code.


Written by Suhail Roushan — Full-stack developer. More posts on AI, Next.js, and building products at suhailroushan.com/blog.
