
AI WordPress Automation With DeepSeek, n8n, and Baserow

How I run AI WordPress automation in production: a self-hosted n8n + Baserow + DeepSeek stack that drafts posts at 2% of GPT-4 cost without SEO penalty.


I’ve been running this AI WordPress automation stack on agency content pipelines for the past year, and it has changed the calculus on what an editorial team can ship in a month. The setup is three self-hosted containers (n8n, Baserow, and a thin DeepSeek client) drafting WordPress posts directly into the CMS as draft status, where a human editor finishes them. No SaaS subscriptions, no per-seat content tools, no data leaving the VPS.

This post is the actual stack I deploy, with the configuration mistakes I’ve made along the way and a few opinions about where AI-generated content earns its keep versus where it lights your SEO on fire. If you’ve been hand-waving about “AI workflows” for clients without committing to a real one, the next forty minutes of reading will save you a quarter’s worth of agency content meetings.

The Compose files in this post are illustrative. The production-tested versions live in our Webnestify Docker repo. Pull from there before pasting anything into a real server.

Why this stack, and not ChatGPT plus a Google Sheet

The “ChatGPT plus a spreadsheet” workflow is what most agencies start with, and it works fine for one or two posts a week. It falls apart the moment you have a real editorial calendar with multiple writers, target keywords, draft states, and a publishing schedule. You end up with a Sheet that nobody trusts, a tab graveyard of half-prompts, and a Slack channel full of “is this one approved or not?” messages.

A self-hosted stack fixes the parts that hurt at scale:

  • Baserow holds the editorial state machine. Every row is a post in flight: topic, target keyword, draft status, assigned reviewer, due date, publish date, model version, the actual draft text. One source of truth.
  • n8n is the glue. It pulls rows ready for drafting, calls DeepSeek with the right prompt, writes the result back to Baserow, and pushes the approved drafts to WordPress on a schedule.
  • DeepSeek is the drafting engine. Cheaper than GPT-4 by an order of magnitude, English output that’s good enough for first-draft work, and an API that behaves like OpenAI’s so the n8n integration is mostly copy-paste.

The other reason to self-host: the data. Editorial pipelines for agency clients touch unpublished content, internal SEO research, and sometimes confidential product details. I am not putting that on someone else’s cloud where a future product change might quietly start training a model on it.

If you’ve never deployed n8n, start with my n8n self-hosted workflow automation guide. The Baserow + DeepSeek pieces sit on top of that same baseline.

What each tool actually does in the pipeline

Before the YAML, the mental model. If you can hold this in your head, the rest of the setup is mechanical.

Baserow: the editorial database

Baserow is a self-hosted no-code database. Think Airtable, but the data lives on your VPS. I use it for one table called posts_in_flight, with these columns:

  • topic (text, the working title)
  • target_keyword (text, the primary SEO keyword)
  • outline (long text, the human-written brief)
  • status (single select: briefed, drafting, drafted, editing, approved, published, killed)
  • draft_body (long text, populated by n8n after DeepSeek runs)
  • model_version (text, populated by n8n with the exact DeepSeek model used)
  • wordpress_post_id (number, populated after WordPress upload)
  • assigned_editor (single select)
  • target_publish_date (date)
  • notes (long text, editor’s revision notes)

That’s the whole schema. Ten columns. Don’t over-engineer it on day one.

n8n: the workflow runner

n8n owns three workflows. Each one runs on a schedule trigger:

  1. Drafting workflow. Every 30 minutes, pull rows where status = briefed, call DeepSeek with the topic and outline, write the draft back to draft_body, set status = drafted.
  2. Publish-to-WordPress workflow. Every hour, pull rows where status = approved, push to WordPress as draft status (yes, both Baserow and WordPress have a “draft” status; they’re tracking different things), set status = published, write the WordPress post ID back.
  3. Health-check workflow. Once a day, count rows in each status, post a summary to Slack. If anything has been stuck in drafting for more than 24 hours, page the team.

That’s it. Three workflows, well under 100 nodes total.
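The health-check logic (workflow 3) is simple enough to sketch as a Code-node-style function. This is an illustrative sketch, not the exported workflow: the `rows` shape, the `updated_at` field name, and the threshold constant are assumptions.

```javascript
// Sketch of the daily health check: count rows per status and flag
// anything stuck in "drafting" longer than 24 hours. Illustrative only;
// the real workflow does this with Baserow + Code + Slack nodes,
// and the row field names here are assumed.
function healthCheck(rows, now = new Date()) {
  const STUCK_HOURS = 24; // threshold from the post
  const counts = {};
  const stuck = [];
  for (const row of rows) {
    counts[row.status] = (counts[row.status] || 0) + 1;
    if (row.status === "drafting") {
      const ageHours = (now - new Date(row.updated_at)) / 3600000;
      if (ageHours > STUCK_HOURS) stuck.push(row.topic);
    }
  }
  return { counts, stuck };
}

// Example run with two fake rows:
const report = healthCheck(
  [
    { topic: "A", status: "drafting", updated_at: "2024-01-01T00:00:00Z" },
    { topic: "B", status: "approved", updated_at: "2024-01-02T00:00:00Z" },
  ],
  new Date("2024-01-03T00:00:00Z")
);
```

The `report` object is what gets formatted into the Slack summary; if `stuck` is non-empty, that run pages the team.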

DeepSeek: the LLM call

DeepSeek’s API is OpenAI-compatible. You hit https://api.deepseek.com/v1/chat/completions with a Bearer token and the same JSON shape you’d send to OpenAI. n8n’s HTTP Request node handles it natively, with no custom node needed.

The model I default to is deepseek-chat, which maps to the current production V3 model. For longer-form drafts I bump max_tokens to 4000 and temperature to 0.4. Low enough to keep the prose grounded, high enough to avoid the same opening sentence on every post.
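For reference, the request body the HTTP Request node sends is plain OpenAI-style JSON. Here is a minimal sketch of a payload builder using the parameters above; the prompt and message layout are placeholders, not the production prompt:

```javascript
// Builds the OpenAI-compatible request body for DeepSeek's
// /v1/chat/completions endpoint. Parameter values match the post;
// the prompt text and user-message layout are illustrative.
function buildDeepseekBody(systemPrompt, topic, outline) {
  return {
    model: "deepseek-chat", // maps to the current production V3 model
    max_tokens: 4000,       // room for long-form drafts
    temperature: 0.4,       // grounded, but not the same opener every time
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: `Topic: ${topic}\n\nOutline:\n${outline}` },
    ],
  };
}

const body = buildDeepseekBody(
  "You are a draft writer.",
  "Example topic",
  "- point one\n- point two"
);
```

In n8n this JSON goes straight into the HTTP Request node's body field, with the Bearer token supplied from the credential store.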

The Docker stack

Three containers behind Nginx Proxy Manager. The full Compose file looks like this:

version: "3.7"

services:
  baserow:
    image: baserow/baserow:latest
    container_name: baserow
    restart: unless-stopped
    environment:
      - BASEROW_PUBLIC_URL=https://${BASEROW_SUBDOMAIN}.${DOMAIN_NAME}
      - DATABASE_HOST=baserow-db
      - DATABASE_NAME=baserow
      - DATABASE_USER=baserow
      - DATABASE_PASSWORD=${BASEROW_DB_PASSWORD}
    volumes:
      - ${DATA_FOLDER}/baserow:/baserow/data
    ports:
      - "8000:80"
    depends_on:
      - baserow-db
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

  baserow-db:
    image: postgres:15
    container_name: baserow-db
    restart: unless-stopped
    environment:
      - POSTGRES_DB=baserow
      - POSTGRES_USER=baserow
      - POSTGRES_PASSWORD=${BASEROW_DB_PASSWORD}
    volumes:
      - ${DATA_FOLDER}/baserow-db:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - N8N_HOST=${N8N_SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - NODE_ENV=production
    volumes:
      - ${DATA_FOLDER}/n8n:/home/node/.n8n
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

Set the variables in a .env file:

BASEROW_SUBDOMAIN=data
N8N_SUBDOMAIN=automation
DOMAIN_NAME=yourdomain.com
GENERIC_TIMEZONE=Europe/Bratislava
DATA_FOLDER=/srv/docker/ai-content
BASEROW_DB_PASSWORD=replace_with_a_long_random_string

Bring it up:

docker compose -f /srv/docker/ai-content/docker-compose.yml up -d

Two Proxy Hosts in NPM, both with Force SSL, HTTP/2 Support, and WebSockets Support enabled. The WebSockets toggle bites you on n8n the first time you forget it. The editor loads but every workflow appears to hang at “Executing…” forever. Same trap as the Uptime Kuma deployment for anyone who’s been there.

The DeepSeek piece doesn’t need its own container. It’s a remote API; the n8n HTTP Request node calls it directly.

Before any of this goes on a public VPS, the server needs to be hardened. The Linux server security fundamentals post is the baseline I run on every box before I install Docker.

The drafting workflow in n8n

Here’s the actual node sequence for the drafting workflow. I’m describing it rather than pasting JSON because the export changes shape between n8n minor versions, and the structure is what matters:

  1. Schedule Trigger. Runs every 30 minutes.
  2. Baserow node (List rows). Filter status = briefed, limit 5. Don’t try to draft 50 posts in one run; rate limits and timeouts will bite.
  3. Loop node. Iterate over each row.
  4. HTTP Request node (DeepSeek). POST to https://api.deepseek.com/v1/chat/completions with the system prompt, the topic, and the outline as the user message.
  5. Code node. Strip whitespace, validate the response has at least 800 words, extract the model version from the response.
  6. Baserow node (Update row). Write draft_body, model_version, set status = drafted.
  7. Slack node (notification). Post a “drafted” message with the post’s topic to the editorial channel.
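Step 5’s validation is worth seeing concretely. A Code-node-style sketch under the assumption of an OpenAI-compatible response shape (`choices`, `message.content`, `model`); the real node also handles API error payloads:

```javascript
// Sketch of the validation step (node 5): trim the draft, enforce a
// minimum word count, and record the exact model the API reports.
// Field names assume the OpenAI-compatible response shape.
function validateDraft(apiResponse, minWords = 800) {
  const draft = (apiResponse.choices?.[0]?.message?.content || "").trim();
  const wordCount = draft.split(/\s+/).filter(Boolean).length;
  if (wordCount < minWords) {
    throw new Error(`Draft too short: ${wordCount} words (minimum ${minWords})`);
  }
  return {
    draft_body: draft,                    // written back to Baserow
    model_version: apiResponse.model,     // exact model string, for auditing
    wordCount,
  };
}

// Example with a fake three-word response and a lowered threshold:
const checked = validateDraft(
  { model: "deepseek-chat", choices: [{ message: { content: "  one two three  " } }] },
  3
);
```

Throwing on a short draft is deliberate: a failed node execution is visible in n8n’s execution log, whereas silently writing a stub draft to Baserow is not.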

The system prompt is the part that earns its keep. Mine is roughly:

You are a draft writer for a managed cloud hosting agency. You write first-person, opinion-led prose with specific numbers and at least one war-story anchor. Use plain English, sentence-case headings, and avoid AI-vocabulary tells (delve, leverage, robust, seamless, comprehensive). Output Markdown. Target length: 1500 words. Always include a 3-bullet key takeaway list at the top, an FAQ section with 4 questions, and exactly one contrarian opinion.

That last line, “exactly one contrarian opinion”, is the single most useful instruction in the prompt. Without it, every draft reads like a chamber-of-commerce blog post. With it, the editor has something to push against, which is the whole point of a draft.

[Figure: n8n workflow editor showing the Baserow trigger, DeepSeek HTTP request, and update node sequence.]

The n8n drafting workflow as it actually runs in production. The DeepSeek HTTP node is the slowest step; everything else takes milliseconds.

Credentials for both Baserow and DeepSeek live in n8n’s encrypted credential store, not in workflow JSON. I covered that trap in detail in the n8n self-hosted post. The short version: workflow exports reference credentials by ID, and the actual API keys never leave the server.

The Baserow setup, in practice

Once Baserow is running, the setup is mostly clicking through the UI. The only piece worth calling out is the API token.

In Baserow, go to Settings → API tokens, create a token scoped to the database that holds posts_in_flight, and give it read and update permissions on that one table. Don’t give it admin scope. Don’t reuse the token across multiple n8n instances.

The token goes into n8n as a Baserow credential (n8n has a first-class Baserow node, so this is a few clicks rather than a custom HTTP setup). Once it’s wired, the List Rows and Update Row nodes show your tables in a dropdown.
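Under the hood, the List Rows node is hitting Baserow’s REST API. A hedged sketch of the equivalent raw request follows; the table ID is invented and the filter querystring shape is an assumption — Baserow generates exact per-database API docs in its UI, so check those before relying on this:

```javascript
// Sketch of the raw Baserow call behind the List Rows node.
// TABLE_ID and the filter querystring are illustrative assumptions;
// the Authorization header format is how Baserow database tokens work.
function buildBaserowListRequest(baseUrl, tableId, token) {
  const params = new URLSearchParams({
    user_field_names: "true", // use column names instead of field_123 IDs
    "filter__status__single_select_equal": "briefed", // assumed filter syntax
    size: "5", // mirror the limit-5 advice from the drafting workflow
  });
  return {
    url: `${baseUrl}/api/database/rows/table/${tableId}/?${params}`,
    headers: { Authorization: `Token ${token}` },
  };
}

const req = buildBaserowListRequest("https://data.yourdomain.com", 42, "xxxx");
```

The n8n Baserow node builds all of this for you; the sketch is only useful if you ever need to hit the API from a plain HTTP Request node instead.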

One concrete tip: turn on Baserow’s row-comment feature for the notes column. Editors leave revision notes there during the editing pass, n8n picks them up on the next run, and you get a poor man’s version of an editorial CMS without paying for one.

The publish-to-WordPress workflow

Three things have to be right here, and getting any of them wrong will create a real mess on a live site.

First: post status is always draft on the way out. Even if the editor has approved the post in Baserow, the WordPress upload sets status: draft. The actual publish happens by hand inside WordPress, so a human is the last to look at the rendered post before it goes live. Yes, this is slower than full auto-publish. Yes, it’s the right tradeoff.

Second: the featured image gets uploaded separately. WordPress’s REST API treats media as a separate resource. The n8n flow uploads the image to /wp-json/wp/v2/media, takes the returned ID, and references that ID as featured_media on the post create call. Don’t try to embed image URLs in the post body and call it done. They won’t be in the media library, and any editor moving them around will break the references.
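The two-step media flow can be sketched as a pair of request builders. The endpoint paths are the standard WordPress REST routes; the auth value, filename, and helper names are illustrative:

```javascript
// Sketch of the two-step featured-image flow: upload the media item
// first, then reference the returned ID on the post create call.
// Auth string and filename are placeholders.
function buildMediaUpload(siteUrl, filename, auth) {
  return {
    method: "POST",
    url: `${siteUrl}/wp-json/wp/v2/media`,
    headers: {
      Authorization: auth,
      "Content-Disposition": `attachment; filename="${filename}"`,
    },
  };
}

function buildPostCreate(siteUrl, title, content, mediaId, auth) {
  return {
    method: "POST",
    url: `${siteUrl}/wp-json/wp/v2/posts`,
    headers: { Authorization: auth },
    body: {
      title,
      content,
      status: "draft",         // always draft on the way out
      featured_media: mediaId, // the ID returned by the media upload
    },
  };
}

const post = buildPostCreate("https://client.example", "Title", "Body", 123, "Basic xxx");
```

In the n8n workflow these are two consecutive HTTP Request nodes, with the media node’s response ID passed into the post node via an expression.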

Third: categories and tags are taxonomy IDs, not strings. The WordPress REST API wants categories: [12, 47], not categories: ["AI", "Automation"]. The n8n workflow has a small Code node that maps friendly names from Baserow to the WordPress taxonomy IDs. Build that mapping once per client and reuse it.
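The name-to-ID mapping in that Code node is only a few lines. The IDs below are invented for illustration; build the real map once per client from the site’s `/wp-json/wp/v2/categories` endpoint:

```javascript
// Sketch of the friendly-name → taxonomy-ID mapping. The IDs are
// invented; pull the real ones from /wp-json/wp/v2/categories.
const CATEGORY_IDS = { AI: 12, Automation: 47, Hosting: 3 }; // per-client map

function mapCategories(names) {
  const unknown = names.filter((n) => !(n in CATEGORY_IDS));
  if (unknown.length) {
    // Fail loudly rather than silently dropping a category.
    throw new Error(`Unmapped categories: ${unknown.join(", ")}`);
  }
  return names.map((n) => CATEGORY_IDS[n]);
}

const ids = mapCategories(["AI", "Automation"]);
```

Throwing on an unmapped name is the important design choice: a failed execution surfaces in n8n, whereas a post that quietly lands uncategorized does not.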

For the WordPress credential, I create a dedicated user with the Editor role (not Administrator, never Administrator for an automation account) and use an Application Password. The token goes into n8n’s credential store. If a token leaks, rotating it is a 30-second job in WP-Admin under Users → Profile → Application Passwords.

If a client’s WordPress login ever gets locked out from this kind of automation work, the WordPress admin recovery post is the playbook I follow.

The SEO trap that ends pipelines

Every AI content pipeline I’ve seen fail in production failed for the same reason: the team published the drafts directly. Google’s helpful-content system treats high-volume, structurally similar AI prose as a quality signal in the wrong direction, and the resulting traffic decline is brutal. I’ve watched a client lose 60% of organic traffic in eight weeks after their previous agency wired ChatGPT directly to WordPress with no editor in the loop.

The fix is not “make the AI better.” The fix is process:

  • Editor pass on every post. The editor adds a first-person anchor (a real war story, a real number, a real opinion), restructures the headings to match the site’s voice, and removes the AI-vocabulary tells. Budget 30-45 minutes per post for this. If your editorial cost model can’t afford that, don’t run the pipeline.
  • Similarity check before publish. Run the draft through a simple similarity check against the existing published archive on the same domain. If it scores above 60% structural overlap with an existing post, kill the draft or merge it into the existing post as an update. n8n’s HTTP node can hit any of the off-the-shelf similarity APIs for this.
  • Drip the publishing schedule. Don’t publish 50 posts in one day even if you have 50 ready. A normal editorial cadence is 1-3 posts a day. Anything past that looks like content spam to Google, regardless of whether the prose is human or AI.
  • Track cannibalization. Two AI drafts on related keywords will cannibalize each other in search results. Keep a target-keyword column in Baserow and reject any new brief whose target keyword is already assigned to a post in flight or a published post.
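The similarity check in the list above doesn’t need anything fancy to be useful as a first gate. Here is a rough stand-in for an off-the-shelf similarity API, using Jaccard overlap on word trigrams; the 60% threshold mirrors the rule above, and a dedicated API will of course do better on structural (rather than lexical) overlap:

```javascript
// Rough lexical-overlap check: Jaccard similarity over word trigrams.
// A crude stand-in for a proper similarity API, usable as a first gate.
function trigrams(text) {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  const grams = new Set();
  for (let i = 0; i + 2 < words.length; i++) {
    grams.add(words.slice(i, i + 3).join(" "));
  }
  return grams;
}

function jaccard(a, b) {
  const A = trigrams(a);
  const B = trigrams(b);
  if (A.size === 0 && B.size === 0) return 0;
  let inter = 0;
  for (const g of A) if (B.has(g)) inter++;
  return inter / (A.size + B.size - inter);
}

// Kill or merge when overlap with an existing post exceeds 60%.
const tooSimilar =
  jaccard("the same exact sentence here", "the same exact sentence here") > 0.6;
```

Run this in an n8n Code node against the archive’s post bodies, or swap in a hosted similarity API via the HTTP node as the post suggests.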

For agency clients with WordPress sites, this pipeline lives downstream of a hardened CMS install. The comprehensive WordPress security guide and the CrowdSec WordPress integration post both feed into that same baseline. Secure the CMS first, then bolt content automation onto it.

What I deliberately leave out

A few things that look obvious in a Hacker News comment thread but that I don’t actually deploy:

  • Vector databases for RAG. For drafting blog posts on topics the model already knows about, RAG adds infrastructure without improving output quality. I’d reach for it if the briefs required up-to-date product documentation, but a normal editorial calendar doesn’t need it.
  • Multi-model voting. Calling DeepSeek and OpenAI and picking the better output sounds clever and burns money for a marginal quality lift. Pick one model, tighten the system prompt, and put the saved budget into the editor pass.
  • Fully automated publishing. Already covered. The discipline is non-negotiable; the pipeline does not flip the WordPress status to publish from n8n. Ever.
  • A custom n8n node for DeepSeek. The HTTP Request node is fine. A custom node adds a build dependency for marginal ergonomic gain.
  • Self-hosting the LLM itself. For draft-grade content, the math doesn’t work. A self-hosted Llama or Mixtral instance needs a GPU box that costs more per month than 12 months of DeepSeek API usage at agency volume. Revisit the calculation in 18 months when the open-weights gap closes; today, a cheap API call wins.

If a client specifically asks for any of those, that’s a separate engagement. The “every editorial pipeline, day one” baseline is what’s in this post.

Closing the loop

This three-tool stack of n8n, Baserow, and DeepSeek has been running on the agency content pipelines I operate for about a year. We’ve shipped roughly 600 drafts through it across four clients, with the editor pass landing somewhere between 30 and 45 minutes per post depending on the topic. The cost line on DeepSeek has stayed under €40 per client per month. The cost line on the equivalent OpenAI usage would be around €1,800.

The point of AI WordPress automation isn’t to replace writers. It’s to move them up the value chain. Drafting is mechanical; editing is the part that compounds the brand voice. This stack does the mechanical part for almost nothing, leaves the compounding part to humans, and stops short of the cliff-edge mistake that gets agencies penalized in search.

If you’re already running n8n for marketing automation or newsletter delivery, bolting Baserow and DeepSeek onto the same VPS is a half-day of work. Start with one client, one editorial calendar, and one editor in the loop. Scale from there.

Video walkthrough

Prefer the screen-recording version of this guide? Watch it on YouTube.

