WordPress Just Gave AI Agents the Keys to Your Content. Here’s What the Safety Model Doesn’t Cover.

There’s a chart doing the rounds on X right now. Organic traffic spiking sharply in early 2024, then a slow, sustained collapse through to March 2026, ending near zero. The comment from Lily Ray: “All the pages in the case study have since been 410’d… guess that didn’t work out so well.” Glenn Gabe recognised the site immediately. It’s not an isolated case. It’s the pattern.

The site in question scaled content fast using AI. The traffic came. Then the penalty came. Then the pages were deleted. Two years of work, gone.

I’m sharing that context because on 20 March 2026 — two days ago — WordPress.com announced that AI agents can now create, edit, and publish content directly on your site through MCP write capabilities. Claude, ChatGPT, Cursor — any MCP-enabled tool can now draft posts, build pages, manage comments, organise categories, and update media metadata through natural language conversation.

The feature is real. The productivity gains are real. And the governance gap is real. This is not an argument against using it. It’s an argument for understanding what it doesn’t protect you from before you enable it.

What WordPress’s safety model does cover

To be fair to Automattic, the safety model is more thoughtful than most AI feature launches. Every action requires explicit confirmation before it executes. New posts default to drafts rather than publishing live. Deletions go to trash with 30-day recovery. All agent activity is logged in the Activity Log. And crucially, the system respects existing WordPress user role permissions — an Editor can’t change site settings, a Contributor can’t publish without review.
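The draft-by-default behaviour is also something you can enforce on your own side of the integration, rather than trusting every tool to honour it. A minimal sketch in Python — function names are mine, not WordPress.com's, and it assumes a site exposing the standard `/wp-json/wp/v2/posts` REST route with an Authorization header you supply (e.g. an application password):

```python
import json
import urllib.request


def agent_post_payload(title, content, requested_status="publish"):
    """Build the JSON body for POST /wp-json/wp/v2/posts.

    Whatever status the agent asks for, the payload is forced to
    "draft" -- the requested_status argument is deliberately ignored,
    so nothing goes live without a human flipping it to publish.
    """
    return {"title": title, "content": content, "status": "draft"}


def create_agent_draft(site_url, auth_header, title, content):
    """Send the draft to the standard WordPress REST API posts route."""
    req = urllib.request.Request(
        f"{site_url}/wp-json/wp/v2/posts",
        data=json.dumps(agent_post_payload(title, content)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": auth_header,  # e.g. "Basic <base64 app password>"
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

The point of the payload builder is that the draft default lives in your code path, not in the agent's good behaviour: even a tool that asks to publish directly still lands in drafts.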

These are sensible defaults. The draft default in particular is the right instinct, provided your team actually uses it as a review stage.

What it doesn’t cover

The approval model is only as good as the person approving. The safety model assumes that when your AI agent asks “shall I publish this?”, the person confirming has actually read and evaluated the content. In practice, a busy marketing assistant approving AI-generated posts at speed is not a governance layer. It is a rubber stamp with extra steps. The confirmation dialogue is not the same as editorial review — and for any client in a regulated sector, that distinction is the difference between a helpful tool and a liability.

I work with law firms. A solicitor’s content carries their professional reputation and sits within SRA regulatory frameworks. “Your AI agent confirms before publishing” does not satisfy the requirement that a qualified person has reviewed the content for accuracy, jurisdiction-specific correctness, and appropriate legal caveats. An AI writing about criminal defence in Scotland may use English law. An AI trained on US legal content will use the word “attorney” rather than “solicitor”. It will default to American English. It will confidently state procedural details that are correct in one jurisdiction and wrong in another. None of that gets caught by a confirmation dialogue.

There is no citation standard applied before content goes live. The AI agent creates content and publishes it. Nothing in this workflow checks whether that content meets the structural requirements for AI citation before it goes out. Speed of publication is not the same as quality of content. A post that fails the CITATE criteria — no standalone opening, no explicit definition, no named statistic, no entity attribution — will be retrieved by AI systems and used anonymously. It will not be cited by name. Publish faster, cite less. That is the current trajectory for anyone who enables this without a content standard applied before the draft goes live.
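A content standard like this can be applied mechanically before any human sees the draft. The sketch below uses crude heuristics of my own devising to approximate the CITATE gaps named above — these are stand-ins for editorial judgement, not the framework's official tests, and they catch absences rather than quality:

```python
import re


def citate_flags(post_text, entity_name):
    """Flag CITATE-style gaps in a draft before it reaches review.

    Each check is a rough proxy: a dependent opening, no definition
    sentence, no named statistic, or no entity attribution by name.
    """
    flags = []
    first_para = post_text.strip().split("\n\n")[0]

    # Standalone opening: a first paragraph that leans on prior context fails.
    if first_para.lower().startswith(("as we", "following on", "in this post")):
        flags.append("opening does not stand alone")

    # Explicit definition: look for a plain "X is/are ..." sentence.
    if " is " not in post_text and " are " not in post_text:
        flags.append("no explicit definition sentence found")

    # Named statistic: look for a percentage figure.
    if not re.search(r"\d+(\.\d+)?\s*(%|per cent|percent)", post_text):
        flags.append("no named statistic (percentage) found")

    # Entity attribution: the brand or author must appear by name.
    if entity_name not in post_text:
        flags.append(f"entity '{entity_name}' never attributed by name")

    return flags
```

A draft that returns an empty list is not automatically good — but a draft that returns four flags should never reach the confirmation dialogue, let alone the publish button.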

Content provenance is unresolved for regulated sectors. If an AI agent publishes a post, who authored it? WordPress.com does not address this in the announcement. For E-E-A-T purposes, for Google’s quality evaluation, for SRA compliance, for medical sector accuracy requirements — authorship is not a metadata question. It is an accountability question.

Speed without quality compounds the wrong problem. The chart I described at the start is the result of scaling content production faster than quality control. WordPress.com’s MCP write capabilities are a significant step up from the bulk AI content tools that caused those penalties — but the underlying risk is the same if the workflow treats AI generation and human approval as equivalent steps.

The question your team needs to answer before enabling this

Not “should we use it?” but “what is our editorial layer?”

Who is responsible for reviewing AI-generated drafts before they are approved for publication? Is that person qualified to assess the accuracy of the content in your specific sector? What is your process for jurisdiction-specific accuracy? How will AI-generated content be attributed for E-E-A-T purposes? What content standard applies before a draft is considered ready for human review?
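Those questions can be made machine-checkable rather than left as a policy document nobody reads. A sketch, with field names of my own invention, that refuses to mark a draft publishable until every governance question above has an answer on record:

```python
from dataclasses import dataclass


@dataclass
class EditorialSignOff:
    """One record per AI-generated draft; fields mirror the questions above."""
    reviewer: str             # who reviewed the draft
    sector_qualified: bool    # is the reviewer qualified in this sector?
    jurisdiction_checked: bool  # jurisdiction-specific accuracy confirmed?
    author_attribution: str   # named author, for E-E-A-T purposes
    content_standard: str     # standard applied before review, e.g. "CITATE"


def ready_to_publish(s):
    """A draft is publishable only when every question has an answer."""
    return (
        bool(s.reviewer)
        and s.sector_qualified
        and s.jurisdiction_checked
        and bool(s.author_attribution)
        and bool(s.content_standard)
    )
```

The value is not the code; it is that an empty field blocks publication instead of being quietly skipped when the team is busy.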

What this means for AI citation specifically

There is a version of this feature that is genuinely valuable for AI search visibility. An AI agent that creates a draft, applies the CITATE criteria during drafting, flags which sections are missing named statistics or explicit entity attribution, and surfaces that for human review before publication — that is a workflow that produces content faster and cites better than most human-only processes.

That is not what WordPress.com has built yet. What they have built is a capable content production tool with good defaults. The citation standard is the missing layer. And until it exists natively, the businesses that will benefit most from this feature are the ones that have already defined their own content standard and apply it before the draft goes live.

AI output is only as good as what you put into it. Speed is not a substitute for context. For the broader picture on where agentic content tools sit in the AI visibility stack, see From Answers to Actions. For the content standard that applies before any AI-generated content goes live, see the CITATE framework. For a diagnostic of where your current content is failing the citation standard, the AI Visibility Audit is the starting point.

Related topics:

agentic-seo ai-seo ai-visibility content-seo future-of-seo llm-optimisation search-trends
Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.