Model Context Protocol: A Practical Guide for Full-Stack Developers

A practical guide to Model Context Protocol — setup, core concepts, common mistakes, and production tips for full-stack developers.

Suhail Roushan

May 1, 2026 · 6 min read

Model Context Protocol (MCP) is an open standard that lets you connect LLMs like Claude or GPT to your internal data sources and tools through a simple client-server architecture.

If you're building AI-powered features, you've likely faced the "context problem": how do you give a large language model access to your database, internal APIs, or file system without pasting everything into the prompt? Model Context Protocol solves this by creating a standardized way for AI applications to discover and use external resources. Instead of building custom integrations for each project, MCP gives you a common interface that works across different AI assistants and data sources. I've implemented it at Anjeer Labs to connect our internal systems to Claude Desktop, and it dramatically reduced the boilerplate code we previously needed.

Why Model Context Protocol Matters (and When to Skip It)

MCP matters because it decouples your AI interface from your data layer. Before MCP, if you wanted Claude to query your PostgreSQL database, you'd write custom scripts that format SQL results into prompts. If you then wanted the same capability in ChatGPT, you'd rewrite everything. MCP standardizes this interaction so your "resources" (databases, APIs, tools) become reusable across different AI clients.

The protocol uses JSON-RPC 2.0 over stdio or HTTP, making it language-agnostic. Your MCP server (which exposes your resources) can be written in TypeScript, Python, or Go, while clients like Claude Desktop or your own AI app can connect to it transparently.

Skip MCP if you're building a one-off, simple integration that will never need to work with another AI interface. Also skip it if your use case involves real-time streaming responses to complex tool calls—MCP's current transport layers have limitations here. But for most full-stack developers who want to give AI assistants safe, structured access to their systems, MCP is becoming the default choice.

Getting Started with Model Context Protocol

The fastest way to understand MCP is to build a simple server. You'll need the @modelcontextprotocol/sdk for TypeScript/JavaScript. Here's a minimal server that exposes a "greet" tool:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

// Declare the server's identity and the capabilities it exposes (tools only here)
const server = new Server(
  {
    name: 'example-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// Advertise the available tools so clients can discover them
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'greet',
      description: 'Generate a greeting for a person',
      inputSchema: {
        type: 'object',
        properties: {
          name: {
            type: 'string',
            description: 'Name of the person to greet',
          },
        },
        required: ['name'],
      },
    },
  ],
}));

// Execute a tool call and return its content
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'greet') {
    const name = request.params.arguments?.name as string;
    return {
      content: [
        {
          type: 'text',
          text: `Hello, ${name}! From your MCP server.`,
        },
      ],
    };
  }
  throw new Error('Tool not found');
});

async function main() {
  // Talk to the client over stdin/stdout
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for protocol messages
  console.error('MCP server running on stdio');
}
}

main().catch((error) => {
  console.error('Server error:', error);
  process.exit(1);
});

Install the SDK with npm install @modelcontextprotocol/sdk, then run this script. It won't do anything visible yet—you need a client like Claude Desktop to connect to it. But this is the foundation: a server declaring tools and handling requests.
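 
To actually see it respond, you can register the server with Claude Desktop. The app reads an mcpServers map from its claude_desktop_config.json (on macOS it typically lives under ~/Library/Application Support/Claude/). A minimal entry, assuming you've compiled the script above to build/server.js, looks roughly like this:

{
  "mcpServers": {
    "example-server": {
      "command": "node",
      "args": ["/absolute/path/to/build/server.js"]
    }
  }
}

After restarting Claude Desktop, the greet tool should show up in the client's tool list and you can ask it to greet someone by name.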

Core Model Context Protocol Concepts Every Developer Should Know

1. Resources vs. Tools: MCP distinguishes between passive data (resources) and active operations (tools). Resources are read-only data sources like database tables or file contents, while tools are functions the AI can execute. This separation is crucial for designing secure MCP servers—you might expose read-only SQL queries as resources but require tool calls for mutations.
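 
To make the split concrete, here's a minimal sketch of exposing a read-only resource alongside the tools from earlier. The config://app-settings URI and its contents are made up for illustration, and the server would also need resources: {} added to its declared capabilities:

import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

// Advertise read-only resources so clients can discover them
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: 'config://app-settings', // hypothetical URI, for illustration only
      name: 'Application settings',
      mimeType: 'application/json',
    },
  ],
}));

// Serve the contents when a client reads the resource
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri === 'config://app-settings') {
    return {
      contents: [
        {
          uri: 'config://app-settings',
          mimeType: 'application/json',
          text: JSON.stringify({ theme: 'dark', maxUploadMb: 25 }),
        },
      ],
    };
  }
  throw new Error('Resource not found');
});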

2. Structured Outputs: Every tool call returns content in a structured format. Here's a more practical tool that fetches user data:

// Hypothetical in-memory stand-in for a real database client
const mockDb = {
  users: {
    async find(id: string) {
      return { id, name: 'Ada Lovelace', email: 'ada@example.com' };
    },
  },
};

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'get_user_by_id') {
    const userId = request.params.arguments?.userId as string;
    const user = await mockDb.users.find(userId);

    return {
      content: [
        {
          type: 'text',
          text: `User Details:\nID: ${user.id}\nName: ${user.name}\nEmail: ${user.email}`,
        },
        {
          type: 'resource',
          resource: {
            uri: `user://${user.id}`,
            mimeType: 'application/json',
            text: JSON.stringify(user, null, 2),
          },
        },
      ],
    };
  }
  throw new Error('Tool not found');
});

Notice we return both human-readable text and a structured JSON resource. The AI client can use either format.

3. Discovery Protocol: Clients discover capabilities through standardized requests (tools/list, resources/list). Your server must handle these requests. The protocol is intentionally simple: no complex negotiation, just an initialization handshake and capability listing.
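 
On the wire this is plain JSON-RPC 2.0. A tools/list request and an abbreviated response for the greet server above look roughly like this:

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "greet",
        "description": "Generate a greeting for a person",
        "inputSchema": {
          "type": "object",
          "properties": { "name": { "type": "string" } },
          "required": ["name"]
        }
      }
    ]
  }
}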

4. Transport Independence: MCP works over stdio (for local integrations) or HTTP (for remote servers). Use stdio when your AI client and data are on the same machine for maximum security. Use HTTP when you need to expose resources to cloud-based AI services, but be meticulous about authentication.

Common Model Context Protocol Mistakes and How to Fix Them

Mistake 1: Exposing raw database connections. I've seen developers pass direct database clients to tool handlers. This is dangerous—a malformed prompt could trigger a DROP TABLE. Instead, wrap database access in hardened functions with strict validation and query limits.
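 
As a sketch of what that wrapping can look like (the get_orders_by_customer tool, the orders table, and the UUID check are all hypothetical, and this assumes the node-postgres pg package): the handler validates the argument, runs one parameterized read-only query with a hard LIMIT, and never lets the model shape the SQL itself.

import { Pool } from 'pg'; // assumes node-postgres; connection settings come from env vars

const pool = new Pool();

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'get_orders_by_customer') {
    const customerId = String(request.params.arguments?.customerId ?? '');

    // Validate input before it touches the database (hypothetical UUID format check)
    if (!/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(customerId)) {
      throw new Error('customerId must be a UUID');
    }

    // Parameterized, read-only query with a hard row limit
    const { rows } = await pool.query(
      'SELECT id, status, total_cents, created_at FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 50',
      [customerId]
    );

    return {
      content: [{ type: 'text', text: JSON.stringify(rows, null, 2) }],
    };
  }
  throw new Error('Tool not found');
});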

Mistake 2: Ignoring pagination with large resources. If your list_files tool returns 10,000 items, you'll exhaust the AI's context window. Implement cursor-based pagination in your tool definitions and results. MCP doesn't enforce pagination patterns, but you should.
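 
A minimal way to do that with a hypothetical list_files tool: accept an optional cursor argument, return one fixed-size page, and hand back a nextCursor the model can pass on the next call.

const PAGE_SIZE = 50;

// Hypothetical in-memory index; in practice this would come from your filesystem or object store
const allFiles: string[] = Array.from({ length: 10_000 }, (_, i) => `file-${i}.txt`);

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'list_files') {
    // Treat the cursor as a numeric offset for simplicity
    const cursor = Number(request.params.arguments?.cursor ?? 0);
    const page = allFiles.slice(cursor, cursor + PAGE_SIZE);
    const nextCursor = cursor + PAGE_SIZE < allFiles.length ? cursor + PAGE_SIZE : null;

    return {
      content: [
        { type: 'text', text: JSON.stringify({ files: page, nextCursor }, null, 2) },
      ],
    };
  }
  throw new Error('Tool not found');
});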

Mistake 3: Overloading tool descriptions. A tool description like "Handles user data" is useless to the LLM. Be specific: "Fetches a user profile by ID from the PostgreSQL users table. Returns name, email, and signup date." Better descriptions lead to more accurate tool usage.

When Should You Use Model Context Protocol?

Use MCP when you need to give AI assistants repeatable, structured access to your internal systems. Common scenarios include letting customer support AI query order databases, enabling developers to ask questions about codebases via IDE assistants, or building internal chatbots that can fetch project metrics. It's particularly valuable when you want the same capabilities available across multiple AI interfaces (Claude, ChatGPT, your own app) without rewriting integrations.

Avoid MCP for simple, static context that fits easily in a prompt. If you're just prepending a few documents to every request, traditional prompt engineering is simpler. Also avoid MCP for highly latency-sensitive applications—the extra network hop between client and server adds overhead.

Model Context Protocol in Production

In production at suhailroushan.com, I run MCP servers in isolated containers with strict resource limits. Each server handles one domain (e.g., database, GitHub API, internal CMS). This follows the Unix philosophy: do one thing well.

Always implement authentication, even for stdio transports. Validate that the connecting client is your trusted AI application. For HTTP transports, use short-lived tokens and scope them to specific resources.
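 
For the HTTP case, the check can be as simple as a middleware in front of whatever endpoint hosts the transport. This sketch assumes an Express app, the jsonwebtoken package, and a hypothetical MCP_TOKEN_SECRET environment variable used to sign short-lived tokens:

import express from 'express';
import jwt from 'jsonwebtoken'; // assumes the jsonwebtoken package is installed

const app = express();

// Reject anything without a valid, unexpired bearer token before it reaches the MCP transport
app.use((req, res, next) => {
  const header = req.headers.authorization ?? '';
  const token = header.startsWith('Bearer ') ? header.slice('Bearer '.length) : '';

  try {
    // jwt.verify checks the signature and the exp claim, so short-lived tokens expire on their own
    jwt.verify(token, process.env.MCP_TOKEN_SECRET as string); // MCP_TOKEN_SECRET is a hypothetical env var
    next();
  } catch {
    res.status(401).json({ error: 'unauthorized' });
  }
});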

Monitor tool usage patterns. Log which tools are called, with what arguments, and their execution time. This data helps you optimize performance and identify potential prompt injection attempts. I've found that about 20% of tools get 80% of the usage—focus on making those rock-solid.
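 
A lightweight way to capture that is to wrap the tool dispatch itself. This sketch logs a JSON line to stderr for each call (stderr so the stdio transport on stdout stays clean); handleToolCall is a hypothetical name standing in for whatever dispatch logic you already have.

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const start = Date.now();
  const tool = request.params.name;

  try {
    const result = await handleToolCall(request); // handleToolCall: your existing dispatch (hypothetical)
    console.error(JSON.stringify({ tool, args: request.params.arguments, ms: Date.now() - start, ok: true }));
    return result;
  } catch (error) {
    console.error(JSON.stringify({ tool, ms: Date.now() - start, ok: false, error: String(error) }));
    throw error;
  }
});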

Start by wrapping one critical data source with MCP, connect it to an AI assistant you use daily, and expand from there.

Written by Suhail Roushan — Full-stack developer. More posts on AI, Next.js, and building products at suhailroushan.com/blog.