
CITATE in Production: How the Standard Behaves Across Different Page Types

The CITATE framework defines six criteria. How those criteria behave — which ones are hardest to meet, which fail most often, and why — varies significantly by page type. This page documents those patterns across production implementations on seostrategy.co.uk, with one named example per criterion.



The six CITATE criteria — C1 through C6 — are consistent across every page type. What is not consistent is where pages fail them, how that failure presents, and what the fix looks like in practice. A framework definition page fails CITATE differently from a location page. A professional services page has different evidence problems from a B2B SaaS case study. Understanding those patterns is what separates applying a standard from applying a template.

This page documents CITATE in production across seostrategy.co.uk — a site that has been systematically built to the standard since March 2026. The patterns here emerge from applying the same six criteria across different commercial contexts: local business pages, technical framework pages, sector-specific service pages, and agentic AI guides. Each criterion is described as a pattern first, then anchored to one named page where that pattern played out in practice. The named pages are not chosen because they are the best examples of CITATE — they are chosen because they are the most illustrative examples of where each specific criterion is hardest to meet.

CITATE was developed by Sean Mullins, SEO Strategy Ltd, March 2026. The canonical definition — what each criterion requires, what it does not, and where the standard applies — is at CITATE: The Framework for AI-Citable Content. This page is the production companion to that definition.

C1 — Standalone opening answer: the location page problem

The pattern: location pages almost universally fail C1 on first draft. The reason is structural. Location pages are written to persuade — they open with a hook, a problem, an empathy statement. “If you’re a business owner in Southampton and you’re not getting the enquiries you should be…” is a sales opening. It is designed to make the reader feel understood. It is not designed to be extracted by an AI system evaluating whether the page contains a citable answer to the query “who provides SEO in Southampton.”

AI systems extract from the beginning of content blocks. They do not wait for the persuasion to finish before evaluating whether the page is citable. A location page that opens with an empathy statement is failing C1 from the first sentence, regardless of how good the content is after it. The AI system either extracts the empathy statement — which is not a citable answer — or passes over the page entirely.

The fix is not to abandon the persuasion — it is to lead with a standalone answer and let the persuasion follow. The answer comes first. The context comes second.

Named example — SEO Agency Southampton: The original opening began “If you’re a business owner in Southampton and you’re not getting the enquiries you should be from search…” — contextual, persuasive, failing C1. The revised opening begins “SEO Strategy Ltd is a Southampton SEO consultancy led directly by Sean Mullins — 20+ years of hands-on search and AI visibility work since 2005. The person who diagnoses your site is the same person who fixes it.” That sentence is extractable, attributable, and answers the implicit query. The persuasion follows. The C1 score moved from fail to pass. The page reached 6/6.

The practical implication: For any location or service page you’re writing or auditing, cover everything except the opening paragraph and ask whether the opening alone would answer the query someone might ask ChatGPT about your service. If it reads as an introduction rather than an answer, C1 is failing. Rewrite the opening to lead with who you are, what you do, and where — in that order.
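This covered-page test can be roughed out in code as a first screening pass. Below is a minimal sketch in Python, not part of the CITATE standard itself: the function name check_c1, the entity string, and the keyword list are illustrative inputs you would supply per page, and whether the opening truly reads as an answer remains an editorial call.

```python
import re

def check_c1(page_text: str, entity: str, keywords: list[str]) -> dict:
    """Screen the opening paragraph for C1: does it lead with a named
    entity and answer-bearing keywords, or with a sales hook?"""
    # Take the first non-empty, blank-line-delimited paragraph.
    paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]
    opening = paragraphs[0] if paragraphs else ""

    names_entity = entity.lower() in opening.lower()
    keyword_hits = [k for k in keywords if k.lower() in opening.lower()]
    # Empathy openings tend to address the reader directly.
    empathy_opening = bool(re.match(r"(?i)\s*if you['’]re\b", opening))

    return {
        "names_entity": names_entity,
        "keyword_hits": keyword_hits,
        "empathy_opening": empathy_opening,
        "likely_pass": names_entity and bool(keyword_hits) and not empathy_opening,
    }

# e.g. check_c1(body_text, "SEO Strategy Ltd", ["SEO", "Southampton"])
```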

C2 — Explicit definition: the assumed vocabulary problem

The pattern: pages written by practitioners for audiences assumed to share their vocabulary consistently fail C2. The author knows what SEO means. The reader probably knows what SEO means. So neither the author nor the editor thinks to define it. And the AI system, evaluating whether the page is citable, cannot attribute a definition that is not on the page.

C2 is the criterion that feels most counterintuitive when first applied. Defining SEO on an SEO agency page feels like explaining to the audience that water is wet. But the definition is not for the human reader who already knows — it is for the AI system that needs an explicit “X is Y” sentence to attribute to a named source. When your definition of a term is on the page, the AI system uses your version and attributes it to you. When it is not on the page, the AI system imports a definition from elsewhere and attributes it to someone else.

The failure mode on framework pages is different from location pages. On a framework page, the author defines their own framework but assumes the surrounding vocabulary. A page that precisely defines CITATE but never defines “AI citation” or “entity corroboration” has partial C2 — the framework is defined, but the terms it depends on are borrowed without attribution.

Named example — SEO & AI Optimisation Consultant London: The original page used the terms GEO, AEO, AIO, and AI Overviews throughout without defining any of them. The assumption was that a London B2B buyer evaluating SEO consultancies would know what these terms meant. The CITATE audit added three inline definitions: “Search engine optimisation is the practice of improving a website’s technical structure, content depth, and authority signals so that search engines rank it higher for relevant commercial queries.” “Generative Engine Optimisation is the practice of structuring a business’s digital presence so that AI-powered systems can discover, extract, cite, and recommend it.” “AI citation eligibility is the state in which a business’s web content is structured so that AI systems can extract a specific claim or answer, attribute it to a named source, and reproduce it in a generated response with confidence.” C2 moved from fail to pass across three terms. The page reached 6/6.

The practical implication: For any page you’re auditing, list every technical term, framework name, or industry concept used on the page. For each one, ask: is there a sentence of the form “X is Y” on this page that I authored? If not, C2 is failing for that term. The fix is not a glossary at the bottom — definitions belong inline, at or near the first use of the term.
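Because this check reduces to searching for defining sentences, it can be mechanised as a first pass before the editorial read. A minimal sketch, assuming the page is available as plain text; the defining verbs (is, are, refers to, means) are an assumption about how inline definitions are typically phrased, not part of the standard.

```python
import re

def check_c2(page_text: str, terms: list[str]) -> dict[str, bool]:
    """For each term, look for a defining sentence of the form
    '<term> is / are / refers to / means ...' anywhere on the page."""
    results = {}
    for term in terms:
        pattern = re.compile(
            rf"\b{re.escape(term)}\b\s*(?:\([^)]*\))?\s+(?:is|are|refers to|means)\b",
            re.IGNORECASE,
        )
        results[term] = bool(pattern.search(page_text))
    return results

# e.g. check_c2(body_text, ["SEO", "GEO", "AI citation eligibility"])
```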

C3 — Statistic with context: the framework page problem

The pattern: framework and concept pages fail C3 most predictably. The reason is that a framework is primarily an argument — it describes a model, defines terms, and explains relationships. Arguments do not naturally contain statistics. The author is making a conceptual case, not reporting on a study. And so the page reaches 6/6 on every other criterion but lands at 5/6 because there is no quantified, contextualised number anywhere on it.

The fix is not to manufacture statistics — it is to borrow external evidence that supports the framework’s core argument. A framework about AI citation eligibility can legitimately reference Seer Interactive’s conversion data. A framework about entity corroboration can legitimately reference the Kevin Indig / Growth Memo citation concentration findings. The statistic does not have to be about the framework itself — it has to be evidence that the problem the framework addresses is real and quantified.

The failure mode for professional services pages is different. On a law firm SEO page or an accountancy practice page, the natural statistics are percentages — “we improved organic traffic by X%” — which are often either unavailable, confidential, or contested. Regulated sectors are particularly resistant to statistics because the professional body guidance on claims-making is conservative. The C3 fix on professional services pages is almost always to borrow an external industry statistic rather than cite internal client data.

Named example — CITATE: The Framework for AI-Citable Content: The CITATE framework page is a definition of a standard. It contains no original research. The C3 criterion was met by borrowing from two external sources that quantify the problem CITATE addresses: the Princeton/GaTech/IIT Delhi GEO-Bench finding (30–40% improvement in AI citation rates from structured content interventions), and the Seer Interactive finding (AI-cited traffic converts at 14.2% versus 2.8% for standard organic). Neither statistic is about CITATE — both are about the commercial consequence of failing the problem CITATE solves. That distinction is important: C3 does not require a statistic about the page’s specific argument. It requires a quantified, sourced number that is extractable in isolation.

The practical implication: For any page failing C3, ask two questions separately. First: is there a number with enough surrounding context — what it measures, at what scale, in what timeframe — that it could be lifted from this page and used in an AI response without misrepresentation? Second: is that number from a named, verifiable source? Both must be true. If you have a number without context, add the context. If you have context without a number, find the number. If you have both but no named source, the stat does not satisfy C3.
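The first question, a number with surrounding context, can be screened automatically before a human judges the second. A minimal sketch; the context cues (percent signs, years, units of scale) are illustrative guesses at what qualifying context looks like inside a sentence, not a definitive list.

```python
import re

# Illustrative context cues: units, timeframes, and scale words.
CONTEXT_CUES = re.compile(
    r"%|\bpercent\b|\bper\b|\b(?:19|20)\d\d\b|\bmillion\b"
    r"|\bvisits\b|\bcitations\b|\bdomains\b|\brate\b",
    re.IGNORECASE,
)

def stat_sentences(page_text: str) -> list[dict]:
    """List every sentence containing a digit and flag whether the same
    sentence also carries measurement context."""
    sentences = re.split(r"(?<=[.!?])\s+", page_text)
    return [
        {"sentence": s.strip(), "has_context": bool(CONTEXT_CUES.search(s))}
        for s in sentences
        if re.search(r"\d", s)
    ]
```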

C4 — Named source: the inline attribution problem

The pattern: pages that have statistics often fail C4 because the source is not where the AI system can find it. A footnote at the bottom of the page fails C4. A link that says “according to research” without naming the research organisation fails C4. A stat that says “studies show” without naming the study fails C4. The source must be in the same sentence or immediately adjacent to the number — not accessible via a click, not resolvable from context, but readable in the same extraction.

This is the criterion that creates the most editorial resistance when first applied. “According to Seer Interactive’s analysis of twelve million visits in 2025” feels like clutter to a writer who has spent years being told to keep copy clean. But the AI system is not reading for elegance. It is reading for extractability. A stat that says “AI-cited traffic converts at 14.2% (source: Seer Interactive, 2025)” is citable. A stat that says “AI-cited traffic converts significantly better than organic (see footnote 3)” is not — because the AI system will not follow the footnote.

The related failure mode is partial attribution — naming the organisation but not the study, or naming the study but not the year. “According to Ahrefs” is partial. “According to Ahrefs tracking data published in February 2026” is complete. The year matters because it gives the AI system a signal about the recency and relevance of the evidence.

Named example — AI Citation Dominance: The original version of this page referenced the Kevin Indig / Growth Memo citation concentration findings without the named source inline — the stat (top 30 domains capture 67% of AI citations per topic) appeared in the body content but the attribution was a link rather than named text. The CITATE audit moved the attribution inline: “A March 2026 analysis of 21,482 ChatGPT citation rows by Kevin Indig (Growth Memo) found that just 30 domains capture 67% of all citations in any given topic.” The source name, study type, dataset size, and date are all in the same sentence as the number. C4 moved from fail to pass.

The practical implication: For every statistic on a page you’re auditing, cover everything except the sentence containing the number and ask: can I identify who produced this finding, what they studied, and approximately when, from this sentence alone? If not, C4 is failing. The fix is almost always to move attribution from a link or footnote into the same sentence as the number.
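The same-sentence test lends itself to the same kind of screening. A minimal sketch that pairs with stat_sentences above; the attribution cues and the proper-noun pattern are assumptions about how inline sourcing is usually written, so a miss is a prompt to look, not a verdict.

```python
import re

# Illustrative cues for inline attribution; tune for your own copy.
ATTRIBUTION_CUES = re.compile(
    r"(?i)according to|source:|analysis of|found that|published (?:in|by)"
)
PROPER_NOUN = re.compile(r"\b[A-Z][a-z]+(?: [A-Z][a-zA-Z]+)+\b")  # e.g. "Seer Interactive"
YEAR = re.compile(r"\b(?:19|20)\d\d\b")

def check_c4(stat_sentence: str) -> dict:
    """Does the sentence carrying the number also name its source, use
    an attribution cue, and date the evidence?"""
    return {
        "has_cue": bool(ATTRIBUTION_CUES.search(stat_sentence)),
        "named_sources": PROPER_NOUN.findall(stat_sentence),
        "has_year": bool(YEAR.search(stat_sentence)),
    }
```

Run against the Kevin Indig sentence quoted above, this would report the "analysis of" cue, the named sources "Kevin Indig" and "Growth Memo", and the year 2026 all inside one sentence, which is exactly what C4 asks for.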

C5 — Named entity: the collective voice problem

The pattern: pages written in the first person plural — “we”, “our team”, “our approach” — without naming the person or organisation responsible fail C5. The collective voice is a convention in agency and consultancy writing. It implies scale, implies team, implies institutional weight. But it is invisible to AI systems evaluating citation eligibility. “We’ve been doing this since 2005” is not citable. “Sean Mullins, founder of SEO Strategy Ltd, has been doing this since 2005” is.

C5 is also where B2B content most commonly fails at the category level, not just the individual page level. A sector landing page that talks about “our law firm SEO work” without naming the specific clients, cases, or practitioner responsible has no C5 anchor. An AI system cannot recommend an unnamed firm with confidence — it can reference the category of work, but it cannot name the provider.

The important nuance: C5 is the prerequisite for recommendation, not just citation. A page can be cited anonymously — “according to one consultancy” — with weak C5. Named recommendation — “according to Sean Mullins, founder of SEO Strategy Ltd” — requires C5 to be explicit and specific. This is why C5 is placed in the Identity layer alongside C6: both criteria are about what AI systems need to name you, not merely reference you.

Named example — Law Firm SEO: The original law firm SEO page used “we” and “our” throughout — appropriate for a service page, natural in context, invisible to AI systems. The CITATE audit added explicit named attribution at two points: Sean Mullins named as the sole practitioner in the opening, with the specific claim that the strategic and technical work are never separated across an agency hierarchy. Olliers Solicitors named as a specific named client with a specific named outcome. Both additions are now present in the body text, not only in metadata. C5 moved from partial to pass.

The practical implication: Read your page for every instance of “we”, “our”, “the team”. For each one, ask whether an AI system reading that sentence could identify the named entity responsible for the claim. If the answer is no, C5 is failing at that point. The fix is not to remove the first person plural — it is to anchor it with a named entity at least once in the opening and once in any section containing a specific claim or case study.
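This sweep is the most mechanical of the six. A minimal sketch; counting collective pronouns and confirming that at least one supplied entity name appears in the body text approximates the check, but whether the name is anchored to a specific claim is still a human judgement.

```python
import re

def check_c5(page_text: str, entity_names: list[str]) -> dict:
    """Count unnamed collective-voice uses and confirm that a named
    entity appears somewhere in the body text."""
    collective = re.findall(r"\b(?:we|our|the team)\b", page_text, re.IGNORECASE)
    found = [n for n in entity_names if n.lower() in page_text.lower()]
    return {
        "collective_uses": len(collective),
        "named_entities_found": found,
        "anchored": bool(found),
    }

# e.g. check_c5(body_text, ["Sean Mullins", "SEO Strategy Ltd"])
```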

C6 — Attributable claim: the hedging problem

The pattern: C6 fails most often on pages written by experienced practitioners who have learned, correctly, to qualify their statements. “Results may vary.” “This depends on your specific situation.” “It’s important to consider multiple factors.” These are honest qualifications. They are also the death of attributable claims. A statement that hedges all its edges cannot be quoted in isolation — because the isolated quote would be misleading without the qualifications.

The tension is real. The qualification is usually there for a good reason — SEO outcomes do vary, context does matter, multiple factors are involved. The resolution is not to remove the qualification — it is to make the core claim specific enough to survive extraction before adding the qualification. “Most businesses never cross the AI Visibility Ceiling — the threshold between being topically visible and being named as a recommended provider — because they optimise Stages 1 through 3 of the AI Provider Selection Pipeline while leaving Stages 4 and 5 entirely unaddressed.” That is specific, it is defensible, it can be attributed to Sean Mullins, and it can be quoted without the surrounding context distorting its meaning. The qualification can follow.

C6 also fails on pages that confuse observations with claims. “AI is changing how people find businesses” is an observation — it is true, it is too broad to attribute to anyone specifically, and it adds nothing to the AI system’s understanding of who produced this page. “The businesses that appear in AI-generated answers convert at five times the rate of businesses appearing only in organic results” is a claim — specific, falsifiable, attributable to a named source, quotable in isolation.

Named example — WebMCP: The Fourth Floor Is Being Built: The original closing paragraph of the WebMCP guide contained the claim “the businesses that win will not be the ones with the most content — they will be the ones easiest for AI systems to trust, select, and act with.” Strong instinct, but not fully attributable — no named author, no named framework context. The CITATE revision anchored it: “WebMCP is not a distribution story or a technology story — it is a control point story. Google’s deployment of the Google-Agent user agent in March 2026 confirms this: agentic evaluation of your content is already happening and is observable in server logs today.” The claim is now specific (names Google-Agent, names a date, names an observable phenomenon), it is attributed to Sean Mullins via the surrounding author block, and it can be quoted without distortion. C6 moved from partial to pass.

The practical implication: For any page you’re auditing for C6, find the most specific, defensible position the page takes. Write it as a single sentence. Test it by asking: could this sentence appear in a third-party article or AI response as a quote attributed to the named entity on this page, without the surrounding context being needed to make it accurate? If the answer requires qualifications to be accurate, add the qualifications to the sentence itself before extracting it. Then check whether the resulting sentence is still specific enough to be worth quoting. If yes, you have C6.
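Specificity cannot be scored by pattern matching, but the hedges that break extraction can at least be flagged. A minimal sketch; the hedge list is illustrative and deliberately short, and a flagged phrase means the qualification should be folded into the sentence, not deleted.

```python
# Illustrative hedge phrases; extend from your own house style.
HEDGES = [
    "results may vary",
    "it depends",
    "depends on your",
    "it's important to consider",
    "many factors",
]

def hedge_scan(claim: str) -> list[str]:
    """Return the hedge phrases found in a candidate quotable sentence."""
    low = claim.lower()
    return [h for h in HEDGES if h in low]
```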

What this means for applying CITATE to your own pages

The six patterns above are not exhaustive — they describe the most common failure mode for each criterion, not the only one. A technical SEO audit page will fail C1 differently from a location page. A healthcare IT case study will fail C4 differently from a framework definition page. The criterion is constant. The context in which it fails is specific to the page type, the audience, and the sector.

What remains consistent is the diagnostic sequence: audit each criterion independently, identify the specific failure mode for your page type, and fix the criterion without disrupting the criteria that already pass. C1 and C2 fixes often require rewriting the opening and adding definitions — structural changes that do not affect C3–C6. C3 and C4 fixes often require finding or adding a named-source statistic — an evidence change that does not affect C1, C2, C5, or C6. C5 and C6 fixes are often a single sentence addition that anchors the named entity and the specific claim — identity changes that leave the structure and evidence layers intact.

The full criteria, with precise definitions of what each requires and what it does not, are at CITATE: The Framework for AI-Citable Content. The page-by-page audit tool that applies these criteria to existing content is at AI Citation Checklist. For the consultancy that designs and builds CITATE-standard content architecture across a site, see LLM Optimisation services.

Key Definitions

CITATE threshold
The CITATE threshold is the point at which a web page becomes extractable, evidenced, and attributable enough for AI systems to cite it with confidence — meaning the AI system can extract a specific answer, attribute the extraction to a named source, and reuse it in a generated response without risk of misrepresentation.
Failure mode
A failure mode in the CITATE context is the specific way a criterion fails for a given page type — distinct from the criterion itself. C1 has a different failure mode on a location page (empathy opening) than on a case study page (narrative opening that builds to the answer rather than leading with it). Understanding the failure mode for your page type is what makes the audit efficient.

How to diagnose CITATE failure modes for your page type

1. Identify your page type before starting the audit

Before checking any individual criterion, identify which page type you are auditing: location/service page, framework definition page, professional services page, case study, or tool/resource page. Each page type has predictable failure modes. Knowing the type tells you which criteria to check first and what the fix is likely to involve.

2. Test C1 by covering everything except the first paragraph

Cover the rest of the page and read only the opening paragraph. Ask: does this paragraph answer the implied query someone might ask ChatGPT about this page? If it requires context from the rest of the page to make sense, C1 is failing. For location pages, the fix is almost always to replace the empathy opening with a named-entity answer. For framework pages, the opening definition usually passes without changes.

3. List every technical term and check for inline definitions

Write down every term on the page that a non-specialist might not know. For each one, search the page for a sentence of the form "X is Y". If no such sentence exists, C2 is failing for that term. The fix is to add the definition inline at or near the first use — not in a glossary, not as a tooltip, not as a link to another page.

4. Find your stat and check the attribution is in the same sentence

Identify the main quantified claim on the page. Cover everything except the sentence containing that number. Ask: can I identify who produced this finding, what they studied, and approximately when, from this sentence alone? If not, C3 or C4 is failing. Move the attribution inline if it is currently in a link or footnote.

5. Check that a named entity appears in the body text, not only in metadata

Search the page body for the name of the person or organisation responsible for the content. If it appears only in a byline, footer, or meta field — but not in the body text — C5 is failing. Add a sentence that names the entity explicitly and connects them to a specific claim or outcome on the page.

6. Write one sentence that could be quoted in isolation

Identify the most specific, defensible position the page takes. Write it as a single sentence. Test it: could this sentence appear in a third-party article as a quote attributed to the named entity on this page, without the surrounding context being needed to make it accurate? If it requires qualifications to be accurate, add those qualifications to the sentence itself. If the resulting sentence is still specific and quotable, C6 is satisfied. A combined sketch of these six checks follows this list.
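The six steps above can be composed into a single screening pass. The sketch below assumes the per-criterion helpers from the earlier sections are collected in one module, hypothetically named citate_checks; the claim argument is the candidate quotable sentence from step 6. The output is a triage signal, not the audit itself: C1 answer quality and C6 specificity still need an editor.

```python
from citate_checks import (  # hypothetical module holding the earlier sketches
    check_c1, check_c2, stat_sentences, check_c4, check_c5, hedge_scan,
)

def citate_scorecard(page_text: str, entity: str, terms: list[str],
                     keywords: list[str], claim: str) -> dict[str, bool]:
    """Heuristic pass/fail per criterion, mirroring diagnostic steps 1-6."""
    stats = stat_sentences(page_text)
    contextual = [s for s in stats if s["has_context"]]
    attributed = []
    for s in contextual:
        r = check_c4(s["sentence"])
        if (r["has_cue"] or r["named_sources"]) and r["has_year"]:
            attributed.append(s)
    return {
        "C1": check_c1(page_text, entity, keywords)["likely_pass"],
        "C2": all(check_c2(page_text, terms).values()),
        "C3": bool(contextual),       # a number with context exists
        "C4": bool(attributed),       # ...and its source is in the same sentence
        "C5": check_c5(page_text, [entity])["anchored"],
        "C6": not hedge_scan(claim),  # specificity still needs an editor
    }
```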

Frequently Asked Questions

Why does the same criterion fail differently on different page types?

Because the failure mode is determined by the purpose of the page, not the criterion itself. C1 requires a standalone opening answer. A location page fails C1 because it is written to persuade — the opening is an empathy statement, not an answer. A case study page fails C1 because it opens with narrative — building context before the finding. A framework page usually passes C1 because it opens with a definition. The criterion is the same. The page type determines which failure mode applies.

Can a page reach 6/6 without changing the core content?

Sometimes — particularly for C4 and C5. C4 often fails because a named source is in a link rather than inline text. Moving the source name into the sentence rather than the anchor text satisfies C4 without changing the content substantively. C5 often fails because the author name is in a byline but not in the body text — adding a single sentence that names the author and their role satisfies C5. C1, C2, C3, and C6 usually require content changes, not just formatting changes.

Is there a page type where CITATE is hardest to apply?

Regulated professional services pages — law firms, medical practices, financial advisors — are consistently the hardest. C3 requires a specific quantified claim with named source: regulated sectors resist making specific claims because professional body guidance on advertising and claims-making is conservative. The resolution is almost always to borrow an external industry statistic rather than cite internal client data, and to frame it as evidence about the sector rather than about the specific firm's performance.

How does CITATE apply to pages that are primarily visual or tool-based?

CITATE applies to the text content of any page that contains text. A calculator or diagnostic tool page that opens with a standalone answer explaining what the tool does and why (C1), defines the key terms the tool uses (C2), and includes a named-source statistic about the problem the tool addresses (C3–C4) can reach 6/6. The tool itself is not evaluated by CITATE — the page content surrounding and describing the tool is.

What is the most common reason a page that looks citation-ready actually fails CITATE?

C6 — the attributable claim criterion. Pages that look citation-ready typically have standalone openings, definitions, and statistics. What they almost universally lack is a single sentence specific enough to be quoted in isolation and attributed to a named source. The content is informative and well-evidenced, but every position is hedged or expressed as an observation rather than a claim. The fix is to identify the most specific, defensible position the page takes and write it as a single sentence that could appear in a third-party article as a direct quote.

Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.
