
How to create an automated blog with n8n and Astro using AI to generate posts

The problem: loose ideas that never become posts

I have a classic problem: many technical ideas and little time to develop them.

I jot things down in Notion, in a notepad… and there they stay. The bottleneck isn’t the idea. It’s sitting down, structuring, writing the frontmatter, creating the file in the Astro collection, and getting it ready to publish.

So I did what any developer with a tendency to automate everything would do: I set up a workflow in n8n that takes an idea, develops it with AI, and generates the .md file directly inside src/content/blog in my Astro project.

No copy and paste. No opening the editor.

Just an idea. The workflow does the rest.


General architecture

The stack is simple:

  • Astro with content collections (astro:content) to manage the blog collection.
  • n8n as the orchestrator.
  • OpenAI (or compatible model) to generate the content.
  • Access to the repo via GitHub API.

The workflow does this:

  1. Receives an idea (manual or from webhook).
  2. Passes it to a structured prompt.
  3. The AI returns the complete post in Markdown, including frontmatter.
  4. n8n creates an .md file in the Astro collection.
  5. Makes an automatic commit.

The project structure in Astro is typical:

src/
  content/
    blog/
      how-i-optimized-lcp.md
      modular-frontend-architecture.md
  content.config.ts

I use typed collections. Something like this:

// src/content.config.ts
import { defineCollection, z } from 'astro:content';

const blog = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.date(),
    draft: z.boolean().default(false),
    tags: z.array(z.string()),
    categories: z.array(z.string()),
    lang: z.enum(['es', 'en'])
  })
});

export const collections = {
  blog
};

If the frontmatter doesn’t match the schema, the build fails. And that’s key: the AI has to generate valid content.
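For reference, this is the shape of a frontmatter block that passes that schema (the values here are made up):

```yaml
---
title: "How to use edge functions with Astro for personalization"
description: "Personalizing Astro pages at the edge, step by step."
pubDate: 2025-01-15
draft: true
tags: ["astro", "edge-functions"]
categories: ["web-development"]
lang: "en"
---
```

draft defaults to false in the schema, but I include it anyway, because a validation step later in the workflow checks that the field is present.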


Step 1: Design the prompt as if it were an API

The biggest mistake when using AI in automations is treating it like a chat.

Here I don’t want uncontrolled creativity. I want rigid structure. As deterministic as possible.

My base prompt in n8n looks like this:

You are Álvaro Moreiro, senior web developer...

[Complete style instructions]

Generate a technical post following EXACTLY this format:

1. Valid YAML frontmatter.
2. Content in markdown.
3. No text outside the markdown.
4. Start with --- and end with the last paragraph.

Post data:
- Topic: {{$json.idea}}
- Language: en
- Level: intermediate
- Date: {{$now}}

I don’t improvise here. The prompt is long and specific.

If you don’t force the format, sooner or later the AI adds a line outside the frontmatter and breaks your build.


Step 2: Create the workflow in n8n

My workflow in n8n has these nodes:

  1. Trigger
  2. Set / Transform
  3. OpenAI
  4. Function (optional cleanup)
  5. GitHub
  6. (Optional) Notification

Trigger

I use two modes:

  • Manual trigger for testing.
  • Webhook for integration with other tools.

Webhook example:

POST /webhook/blog-idea
{
  "idea": "How to use edge functions with Astro for personalization"
}

That idea travels through the entire workflow.


OpenAI Node

In n8n I configure the node with:

  • Model: gpt-5.2 or equivalent.
  • Temperature: 0.7 (I don’t go higher).
  • Response format: plain text.

The input is the complete prompt plus the dynamic idea.

The exact output field varies with the node version, but the goal is always the same: a single string containing the entire post.


Step 3: Generate the filename automatically (Optional)

I don’t want files like post-123.md. I want clean slugs.

So after the AI node, I add a Function node to:

  1. Extract the title from the frontmatter.
  2. Convert it to a slug.
  3. Build the final path.

Example:

const content = $json.post;

// Extract title from frontmatter
const match = content.match(/title:\s*"(.+)"/);
if (!match) {
  throw new Error("Could not extract title");
}

const title = match[1];

const slug = title
  .toLowerCase()
  .normalize("NFD")
  .replace(/[\u0300-\u036f]/g, "")
  .replace(/[^a-z0-9\s-]/g, "")
  .trim()
  .replace(/\s+/g, "-");

return [
  {
    json: {
      content,
      slug,
      path: `src/content/blog/${slug}.md`
    }
  }
];

This avoids weird characters and accents.

I prefer generating the slug from the actual title because it maintains semantic coherence. If the title changes, the slug does too.
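The slug logic above can be pulled out into a standalone function for quick sanity checks outside n8n:

```javascript
// Standalone version of the slug transformation used in the Function node.
function slugify(title) {
  return title
    .toLowerCase()
    .normalize("NFD")                 // split accented chars into base + combining accent
    .replace(/[\u0300-\u036f]/g, "")  // drop the combining accents
    .replace(/[^a-z0-9\s-]/g, "")     // strip anything that is not slug-safe
    .trim()
    .replace(/\s+/g, "-");            // collapse whitespace into hyphens
}

slugify("Cómo usar Edge Functions con Astro");
// → "como-usar-edge-functions-con-astro"
```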


Step 4: Create the file on GitHub

Here I use the official GitHub node.

Operation: Create File

Key fields:

  • Repository
  • Branch: main
  • File Path: {{$json.path}}
  • Content: {{$json.content}}
  • Commit Message: feat(blog): add {{$json.slug}}

n8n converts the content to base64 automatically if you use the official node.
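That conversion is just standard base64. If you ever build the request yourself against the GitHub Contents API instead of using the node (not something this workflow needs), the Node.js equivalent is:

```javascript
// The GitHub Contents API expects the "content" field base64-encoded.
// The official n8n GitHub node performs this step for you.
const post = '---\ntitle: "Example"\n---\n\nBody of the post.';
const encoded = Buffer.from(post, "utf8").toString("base64");

// Decoding it back returns the original markdown unchanged.
const decoded = Buffer.from(encoded, "base64").toString("utf8");
```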


Handling real errors

This is where it stops being an experiment and becomes a system.

1. Minimal frontmatter validation

Before sending to GitHub, I add another check:

const content = $json.content;

const requiredFields = [
  "title:",
  "description:",
  "pubDate:",
  "draft:",
  "tags:",
  "categories:",
  "lang:"
];

for (const field of requiredFields) {
  if (!content.includes(field)) {
    throw new Error(`Missing field ${field}`);
  }
}

It’s not perfect. But it prevents broken commits.
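If the substring checks feel too loose, a slightly stricter sketch parses the frontmatter block first and checks top-level keys, so a mention of "tags:" in the body can’t produce a false positive:

```javascript
// Stricter variant: extract the frontmatter block, collect its top-level
// keys, and verify every required field is actually declared there.
function checkFrontmatter(content) {
  const match = content.match(/^---\n([\s\S]*?)\n---/);
  if (!match) throw new Error("No frontmatter block found");

  const keys = match[1]
    .split("\n")
    .filter((line) => /^[a-zA-Z]/.test(line)) // top-level keys only, skip indented values
    .map((line) => line.split(":")[0].trim());

  const required = ["title", "description", "pubDate", "draft", "tags", "categories", "lang"];
  for (const field of required) {
    if (!keys.includes(field)) throw new Error(`Missing field ${field}`);
  }
}
```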


2. Avoid text outside the markdown

Sometimes the model adds something like:

Here’s the post:

That breaks everything.

Solution: enforce in the prompt and, if needed, trim from the first occurrence of ---.

let content = $json.content;

const startIndex = content.indexOf('---');
if (startIndex > 0) {
  content = content.slice(startIndex);
}

I prefer preventing in the prompt rather than patching afterwards.


3. Control length

If the post is too short, I discard it.

if (content.length < 5000) {
  throw new Error("Content is too short");
}

I don’t want mediocre posts published automatically.


Advanced variant: workflow with optional human review

I don’t always publish directly.

Sometimes I prefer the workflow to:

  1. Generate the file.
  2. Create a Pull Request instead of committing to main.

With the GitHub node you can create a dynamic branch:

feature/auto-post-{{$json.slug}}

Then:

  • Create file on that branch.
  • Create Pull Request.

This way I review the content before merging.

It’s the middle ground between full automation and editorial control.
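All the dynamic names the GitHub nodes need can be derived in one Function node. A small sketch (the branch and commit message follow the conventions above; the PR title format is just an illustration of mine):

```javascript
// Hypothetical helper deriving every dynamic name for the review flow.
// Branch and commit message match the conventions used in this post;
// prTitle is an assumption, not something n8n requires.
function reviewNames(slug) {
  return {
    branch: `feature/auto-post-${slug}`,
    commitMessage: `feat(blog): add ${slug}`,
    prTitle: `Auto post: ${slug}`,
  };
}
```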


What I learned building this

AI needs clear boundaries

The more open the prompt, the worse the result.

When I defined:

  • Exact format.
  • Mandatory structure.
  • Style rules.
  • Hard constraints.

Quality improved significantly.


n8n scales better than it seems

At first I saw it as a low-code tool.

But with:

  • Function nodes
  • JS expressions
  • Webhooks
  • API integrations

It becomes a very serious automation backend. In a future post we’ll see how to self-host it quickly and cheaply.

For this case, I don’t need to set up a microservice in Node. n8n already handles the orchestration for me.


Astro fits perfectly for this

Astro + content collections is ideal for automatic generation.

Because:

  • Content is files.
  • The schema is typed.
  • The build fails if something doesn’t match.

If I used a traditional CMS, I’d have to validate against an API, handle content states, and so on.

Here it’s Git + Markdown. Simple. Predictable.


Extensions I have in mind

This is just the base. It can be taken further:

  • Generate featured images with AI and save them in /public/images.
  • Automatically extract tags based on content.
  • Generate English version from the same workflow.
  • Send the post to a newsletter.

All orchestrated from n8n.

Once you have the workflow, adding nodes is trivial.


Conclusion

Automating my blog isn’t about publishing more. It’s about reducing friction.

I already have the idea. The criteria too. What I automate is the mechanical part: structuring, formatting, creating the file, committing.

n8n gives me the glue. Astro gives me structure and validation. AI does the heavy lifting of writing.

The result: a system where an idea becomes a published post in minutes. And that completely changes the speed at which I can build technical content.