deepseek-api · typescript · node.js · next.js · xml-sitemap

How I Built an AI Blog Generator That Writes 48 Posts for Under ₹10

How I built an automated blog generation pipeline using DeepSeek chat API with retry logic, content validation, and sitemap generation — architecture decisions, key challenges, and what I'd do differently.


Suhail Roushan

May 17, 2026

·
5 min read

I built an AI blog generator that writes 48 SEO-optimized posts for under ₹10, automating my content pipeline completely. This system uses the DeepSeek API, runs on a Node.js backend, and publishes to a Next.js site with an updated XML sitemap. It solved my need for consistent, affordable technical content without manual writing.

The core idea was simple: automate the entire workflow from prompt to publication. I needed a system that could generate posts in bulk, handle failures gracefully, validate the output, and seamlessly integrate the new content into my live portfolio at suhailroushan.com.

Architecture Overview

The system follows a linear pipeline with clear separation between generation, validation, and deployment. I chose a server-side Node.js script for the heavy lifting, keeping the Next.js frontend static and fast.

flowchart TD
    A[Topic & Keyword List] --> B[Node.js Generator Script]
    B --> C{DeepSeek API Call}
    C --> D[Retry Logic & Error Handling]
    D --> E[Content Validation]
    E --> F[Markdown File Creation]
    F --> G[Git Commit & Push]
    G --> H[Next.js Rebuild]
    H --> I[XML Sitemap Regeneration]

Each stage is a discrete module. The generator fetches a topic list, processes each through the AI, validates the response, writes files, and triggers the site update. This modularity made debugging straightforward—I could isolate failures to a specific stage.
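That stage-by-stage structure can be sketched as a plain loop over topics. This is a hypothetical orchestrator, not the exact production script: the `generate`, `validate`, and `write` callbacks stand in for the real modules, and the per-topic result record is what makes failures easy to isolate.

```typescript
// Hypothetical orchestrator: each stage is its own module, so a failure
// can be recorded per topic without aborting the whole batch.
type Stage = 'generate' | 'validate' | 'write';

interface PipelineResult {
  topic: string;
  ok: boolean;
  failedStage?: Stage;
}

async function runPipeline(
  topics: string[],
  generate: (topic: string) => Promise<string>,
  validate: (content: string) => boolean,
  write: (topic: string, content: string) => Promise<void>
): Promise<PipelineResult[]> {
  const results: PipelineResult[] = [];
  for (const topic of topics) {
    let stage: Stage = 'generate';
    try {
      const content = await generate(topic);
      stage = 'validate';
      if (!validate(content)) {
        results.push({ topic, ok: false, failedStage: stage });
        continue;
      }
      stage = 'write';
      await write(topic, content);
      results.push({ topic, ok: true });
    } catch {
      results.push({ topic, ok: false, failedStage: stage });
    }
  }
  return results;
}
```

A failed topic shows up in the results with the stage that broke it, which is exactly the debugging property described above.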

Key Technical Decisions

Using TypeScript for the generator script was non-negotiable. The DeepSeek API response is a large, nested object; defining strict interfaces prevented silent bugs and made the code self-documenting.

interface BlogPost {
  title: string;
  slug: string;
  content: string;
  metaDescription: string;
  primaryKeyword: string;
  wordCount: number;
}

interface DeepSeekResponse {
  choices: Array<{
    message: {
      content: string;
    };
  }>;
  usage: {
    total_tokens: number;
  };
}

async function generatePost(topic: string): Promise<BlogPost> {
  const prompt = `Write a technical blog post about ${topic}...`;
  const response = await fetch('https://api.deepseek.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.DEEPSEEK_API_KEY}`
    },
    body: JSON.stringify({ model: 'deepseek-chat', messages: [{ role: 'user', content: prompt }] })
  });
  if (!response.ok) throw new Error(`DeepSeek API error: HTTP ${response.status}`);
  const data: DeepSeekResponse = await response.json();
  // Parse and validate the content here
  return parseAIContent(data.choices[0].message.content);
}

I implemented exponential backoff retry logic in a wrapper around the fetch call. The DeepSeek API is reliable, but network issues happen. Retrying with a maximum of three attempts saved several batches from failing.

async function fetchWithRetry(url: string, options: RequestInit, maxRetries = 3): Promise<Response> {
  let lastError: Error = new Error('No attempts made');
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url, options);
      if (response.ok) return response;
      throw new Error(`HTTP ${response.status}`);
    } catch (error) {
      lastError = error as Error;
      if (i === maxRetries - 1) break; // No point waiting after the final attempt
      const delay = Math.pow(2, i) * 1000; // Exponential backoff: 1s, 2s, 4s
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}

What Broke and How I Fixed It

The first major issue was malformed markdown. The AI occasionally returned content with broken headers or unclosed code blocks. I wrote a validation function that checks for basic markdown structure and rejects any post that fails, logging the error for manual review.

function validateMarkdown(content: string): boolean {
  const lines = content.split('\n');
  let inCodeBlock = false;
  for (const line of lines) {
    if (line.startsWith('```')) {
      inCodeBlock = !inCodeBlock;
      continue;
    }
    // Skip code block contents, where lines like "#include" are legal
    if (inCodeBlock) continue;
    // Check for headers that don't have a space after #
    if (/^#{1,6}[^#\s]/.test(line)) return false;
  }
  if (inCodeBlock) return false; // Unclosed code block
  return true;
}

The second problem was cost overrun on the first test. I naively generated full-length posts (1000+ words) for all 48 topics in one batch, which nearly exhausted my initial API credit. I fixed this by adding a draftMode flag to the prompt, instructing the AI to write a shorter, structured outline first. I then generated full posts only for the outlines that passed validation, cutting token usage by over 60%.
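The outline-first flow can be sketched like this. It is a simplified sketch, not the production code: `callModel` stands in for the actual DeepSeek API call, and the prompt wording and word limits are illustrative.

```typescript
// Hypothetical two-stage generation: a cheap structured outline first,
// then a full post only for outlines that pass validation.
async function generateWithDraftMode(
  topic: string,
  callModel: (prompt: string) => Promise<string>,
  validateOutline: (outline: string) => boolean
): Promise<string | null> {
  // Stage 1 (draft mode): short outline, far fewer output tokens
  const outline = await callModel(
    `Write a structured outline (max 200 words) for a blog post about ${topic}.`
  );
  if (!validateOutline(outline)) return null; // Bad topics fail cheaply here

  // Stage 2: expand only validated outlines into full-length posts
  return callModel(
    `Expand this outline into a 1000+ word technical blog post:\n\n${outline}`
  );
}
```

The token savings come from stage 1 being small: topics that would have produced unusable posts are filtered out before the expensive full-length call ever happens.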

How to Build Something Similar

Start with a simple Node.js script and a single topic. Don't build the full pipeline upfront. Use the DeepSeek API directly with a well-crafted prompt that specifies format, tone, and length.

Your first script should:

  1. Read a list of topics from a JSON file.
  2. Call the API for one topic.
  3. Save the response as a markdown file in your Next.js app/blog/posts/ directory.
  4. Manually verify the output.
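The four steps above fit in one small script. This is a sketch under assumptions: the topics.json shape (an array of strings), the slug rule, and the output path are choices you would adapt to your own repo.

```typescript
import { readFileSync, writeFileSync } from 'node:fs';

// Assumed slug rule: lowercase, non-alphanumerics collapsed to hyphens
function slugify(topic: string): string {
  return topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
}

// Minimal first iteration: one topic, one API call, one markdown file
async function main() {
  const topics: string[] = JSON.parse(readFileSync('topics.json', 'utf8'));
  const topic = topics[0];

  const response = await fetch('https://api.deepseek.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.DEEPSEEK_API_KEY}`
    },
    body: JSON.stringify({
      model: 'deepseek-chat',
      messages: [{ role: 'user', content: `Write a technical blog post about ${topic}.` }]
    })
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  const content: string = data.choices[0].message.content;

  writeFileSync(`app/blog/posts/${slugify(topic)}.md`, content);
}
```

Run it, open the generated file, and read it end to end before automating anything further.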

Once that works, add retry logic, then validation, then batch processing. Integrate with your site's build process last. For Next.js, you can trigger a rebuild by committing the new markdown files to your repository—Vercel handles the rest.
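The sitemap stage is similarly small. A sketch of regenerating sitemap.xml from the post slugs (the `/blog/` URL structure here mirrors my site; adapt it to yours):

```typescript
// Hypothetical sitemap generation: one <url> entry per post slug,
// following the sitemaps.org XML format.
function buildSitemap(baseUrl: string, slugs: string[], lastmod: string): string {
  const urls = slugs
    .map(
      (slug) =>
        `  <url>\n    <loc>${baseUrl}/blog/${slug}</loc>\n    <lastmod>${lastmod}</lastmod>\n  </url>`
    )
    .join('\n');
  return (
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    `${urls}\n` +
    '</urlset>'
  );
}
```

Writing this string to public/sitemap.xml during the build means every new markdown file is picked up by search engines on the next deploy.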

Would I Build It the Same Way Again?

For a pure content generation pipeline, yes. The cost-effectiveness is unmatched: 48 posts for under ₹10 is roughly ₹0.20 per article. The technical stack is minimal and maintainable.

I would, however, invest more time in the initial prompt engineering. The quality variance between my first and final prompts was massive. A poorly structured prompt generates generic, low-value content. A precise, detailed prompt with examples generates posts that feel genuinely useful. I now keep a "prompt library" for different content types.

The one thing you should know before starting is that AI-generated content requires rigorous validation—you cannot just pipe the API response directly to your site. Build your validation logic first, not last.


Written by Suhail Roushan — Full-stack developer. More posts on AI, Next.js, and building products at suhailroushan.com/blog.
