All posts

LangGraph: A Practical Guide for Full-Stack Developers

A practical guide to LangGraph — setup, core concepts, common mistakes, and production tips for full-stack developers.

Suhail Roushan

April 21, 2026 · 6 min read

LangGraph is a framework for building stateful, multi-actor applications with LLMs, enabling complex workflows that go beyond simple prompts.

If you've built applications with large language models, you've likely hit a wall with simple, linear chat. Real-world tasks—like a customer support bot that checks a database, then a knowledge base, then drafts an email—require orchestration. That's where LangGraph comes in. It's a library from the LangChain team that lets you model these workflows as graphs, where nodes are functions and edges define the flow of control. I've found it indispensable for moving from prototypes to robust, multi-step AI agents. This guide will walk through its core concepts, practical code, and when it's the right tool for the job.

Why LangGraph Matters (and When to Skip It)

LangGraph matters because it brings software engineering principles to LLM workflows. Instead of a tangled mess of if statements and callbacks, you define a clear, visualizable graph. This makes complex, stateful agents—which need memory, tools, and conditional logic—testable and maintainable.

However, be opinionated about when to use it. If your application is a single LLM call with a static prompt, skip LangGraph. You're adding complexity for no benefit. Use it when you have a true workflow: distinct steps, conditional routing, or a need for persistent state between interactions. I see developers reach for it too early; start with simple chains, and introduce LangGraph only when the logic becomes unwieldy.

Getting Started with LangGraph

The fastest way to understand LangGraph is to build something. You'll need the @langchain/langgraph package and an LLM. Let's create a minimal agent that decides whether to use a calculator.

First, install the packages:

npm install @langchain/langgraph @langchain/openai

Now, here's a basic graph with two nodes:

import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { calculator } from "./tools/calculator"; // Assume a simple tool

// 1. Define your application's state
const StateAnnotation = Annotation.Root({
  // Holds the conversation; the reducer appends new messages
  messages: Annotation<any[]>({
    reducer: (current, update) => current.concat(update),
    default: () => [],
  }),
  // Custom flag a node can set to force the calculator route
  needsCalculation: Annotation<boolean>({
    reducer: (_current, update) => update,
    default: () => false,
  }),
});

// 2. Define your nodes (functions)
async function llmNode(state: typeof StateAnnotation.State) {
  const model = new ChatOpenAI({ model: "gpt-4o-mini" });
  // Bind the calculator tool to the model
  const modelWithTools = model.bindTools([calculator]);
  const response = await modelWithTools.invoke(state.messages);
  return { messages: [response] };
}

async function calculationNode(state: typeof StateAnnotation.State) {
  // ToolNode executes the tool calls found on the last AI message
  const toolNode = new ToolNode([calculator]);
  return await toolNode.invoke({ messages: state.messages });
}

// 3. Build the graph
const workflow = new StateGraph(StateAnnotation)
  .addNode("llm", llmNode)
  .addNode("calculator", calculationNode)
  .addEdge("__start__", "llm") // Start at the LLM
  .addConditionalEdges("llm", (state) => {
    // Route based on whether the LLM called a tool
    const lastMessage = state.messages[state.messages.length - 1];
    return lastMessage.tool_calls?.length > 0 ? "calculator" : "__end__";
  })
  .addEdge("calculator", "llm"); // After calculating, go back to LLM

const app = workflow.compile();

This graph now has a cycle: LLM -> (maybe) Calculator -> LLM. It's a simple but powerful pattern for tool-use agents.

Core LangGraph Concepts Every Developer Should Know

1. State Management: Everything in LangGraph revolves around a shared state object. You define its schema upfront using Annotation, including a reducer that controls how each field is updated. Nodes read from the state and return partial updates, which the reducers merge back in. This is different from LangChain's sequential chains; here, a single state object is shared across the entire graph.

2. Nodes and Edges: A node is any async function that takes state and returns a state update. An edge defines what node to run next. The key power comes from addConditionalEdges, which lets you route dynamically based on the state.

3. Cycles and Interruption: Unlike directed acyclic graphs (DAGs), LangGraph supports cycles. This is essential for agents that loop "think -> act -> think." You can model human-in-the-loop workflows by adding a node that pauses for user input.

4. Checkpointing: This is LangGraph's superpower for persistence. The runtime can save the state of the graph at any point (a checkpoint), allowing you to pause and resume long-running workflows. This is critical for production, like handling a support ticket over days.

Here's a richer router using a custom state field. Note that a node can have only one set of conditional edges, so this would replace the earlier router rather than adding a second one:

// Alternative router: use instead of the earlier addConditionalEdges("llm", ...)
workflow.addConditionalEdges("llm", (state) => {
  // Check our custom flag first
  if (state.needsCalculation) {
    return "calculator";
  }
  // Otherwise, check for tool calls from the LLM
  const lastMsg = state.messages[state.messages.length - 1];
  if (lastMsg.tool_calls?.length) {
    return "calculator";
  }
  return "__end__";
});

Common LangGraph Mistakes and How to Fix Them

Mistake 1: Overcomplicating the State Schema. Developers often dump everything into the state. This makes nodes brittle and hard to reason about. Fix: Keep state minimal. Use the Annotation system to define only the data each node needs. Group related fields into nested annotations.

Mistake 2: Forgetting to Return State Updates. A node must return an object that updates the state. A common error is to mutate the state object directly and return nothing, which can cause undefined behavior. Fix: Always return a partial state object, like return { messages: [newMessage] };.
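A minimal illustration of the difference, with a plain object type standing in for a real graph state:

```typescript
// Wrong: mutates state and returns nothing, so the runtime sees no update
async function badNode(state: { messages: string[] }) {
  state.messages.push("hi"); // this mutation is not a state update
}

// Right: return a partial state object; the graph merges it via the reducer
async function goodNode(state: { messages: string[] }) {
  return { messages: ["hi"] };
}
```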

Mistake 3: Ignoring Error Handling in Nodes. If a node (e.g., a tool call) throws an error, the entire graph crashes. Fix: Wrap node logic in try/catch blocks and design error states into your graph. You can have a dedicated errorHandler node that the graph routes to on failure.
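Here's a sketch of that pattern. fetchOrder, errorHandler, and draftReply are hypothetical names; the point is that the node converts a thrown error into a state field that a conditional edge can route on:

```typescript
// Sketch: catch failures inside the node and record them in state,
// so a conditional edge can route to a recovery node instead of crashing.
type OrderState = { orderId: string; order?: { id: string }; error?: string };

// Hypothetical helper standing in for a real API call that may throw
async function fetchOrder(id: string): Promise<{ id: string }> {
  if (!id) throw new Error("order not found");
  return { id };
}

async function orderNode(state: OrderState) {
  try {
    return { order: await fetchOrder(state.orderId) };
  } catch (err) {
    // Record the failure instead of re-throwing
    return { error: err instanceof Error ? err.message : String(err) };
  }
}

// Conditional edge: route on the error field
const routeAfterOrder = (state: OrderState) =>
  state.error ? "errorHandler" : "draftReply";
```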

When Should You Use LangGraph?

Use LangGraph when you are building a stateful, multi-step application with an LLM. This includes:

  • AI Agents: Systems that autonomously use tools (APIs, databases) in a loop.
  • Complex Chatbots: Support bots that must follow a strict workflow (e.g., collect info -> query KB -> escalate).
  • Long-Running Processes: Document processing pipelines where state must be saved and resumed.
  • Human-in-the-Loop Systems: Workflows that require approval or input from a person between automated steps.

Do not use LangGraph for simple retrieval-augmented generation (RAG) or single-turn chat. The overhead isn't justified. A good rule of thumb: if you can't draw your application's logic as a flowchart with at least one decision diamond, you probably don't need a graph.

LangGraph in Production

For production use on suhailroushan.com or any real project, keep these tips in mind:

1. Implement Robust Checkpointing: Use the built-in checkpointing with a persistent backend (like PostgreSQL). This allows you to recover from failures and manage long-lived sessions. The MemorySaver is for prototyping only.

2. Use Subgraphs for Modularity: Break down complex graphs into smaller, reusable subgraphs. This is similar to breaking down a large function into smaller ones. It makes testing and debugging far easier.

3. Add Observability Early: Instrument your graph with tracing (LangSmith is built for this). Log the state at each node and track execution paths. When a user says "the bot got stuck," you need to see exactly which node it was in and what the state was.

Start your next multi-step AI agent by sketching its flowchart first, then translate it directly into a LangGraph.

Written by Suhail Roushan — Full-stack developer. More posts on AI, Next.js, and building products at suhailroushan.com/blog.
