Last updated: April 2026
Why Most AI Visibility Strategies Fail
Ask ten SEO agencies what they mean by AI optimisation and most will describe one of three things: rewriting content to be more conversational, adding schema markup, or building backlinks. Each of these addresses a real part of the problem. None of them addresses all of it.
The reason AI visibility strategies underperform — and the reason so many businesses invest in AI SEO work and see no measurable improvement in citation rates — is that every AI discovery system operates across five distinct layers simultaneously. A strategy that addresses only one or two layers will produce patchy, inconsistent results across platforms, because the layers interact. Fixing your content structure while ignoring your entity authority is like fixing the plumbing in a house with no mains connection. The water still doesn’t flow.
The AI Discovery Stack is a framework for understanding how AI systems actually work — and therefore what you actually need to optimise. It is not a new discipline. It is a structural model that explains how the disciplines you already know — technical SEO, entity SEO, content strategy, AEO, GEO, AIO, AAO — map onto the underlying architecture of AI discovery.
The Algorithmic Trinity: What Every AI System Is Built On
Before we can understand the five layers, we need to understand what is running underneath them. Every AI system that makes recommendations, generates answers, or takes autonomous actions — including Google AI Overviews, Perplexity, ChatGPT Search, Microsoft Copilot, and any AI agent — is built on the same three components. Jason Barnard of Kalicube calls this the Algorithmic Trinity.
Large language models are responsible for synthesis, interpretation and selection. The LLM reads the retrieved content, understands the query intent, evaluates candidate sources against each other, and generates a response. ChatGPT is LLM-heavy — its retrieval layer is thinner and its synthesis capability dominates. This is why ChatGPT citations are often less predictable: the model’s own synthesis judgements carry more weight relative to the retrieved sources.
Knowledge graphs hold structured facts about entities — companies, people, places, concepts, products — and the relationships between them. Google’s Knowledge Graph is the most developed, accumulated across decades of indexing and user interaction. Bing’s knowledge infrastructure underlies ChatGPT Search and Microsoft Copilot. Knowledge graphs are how AI systems answer questions like “who founded this company?”, “is this source credible?”, and “does this business have the expertise they claim?” without reading a webpage. If your entity isn’t well-represented in the relevant knowledge graph, AI systems fill the gap with uncertainty — and uncertain sources are not reliable citation targets.
Traditional search remains the retrieval foundation for most AI discovery systems. Pages that aren’t indexed are invisible regardless of content quality. Bing indexing is particularly critical because it feeds both ChatGPT Search and Microsoft Copilot — a page absent from the Bing index does not exist for those platforms. The classic SEO signals — crawlability, page speed, canonical management, internal architecture — still gate whether your content enters the discovery pipeline at all.
The proportions differ by platform: Google weights its knowledge graph heavily; ChatGPT weights its LLM; Perplexity weights its own retrieval index with aggressive freshness weighting. But all three components are always present. An AI visibility strategy that optimises for only one component produces platform-specific results at best and no results at worst. Full-stack AI visibility means working across all three simultaneously — and the AI Discovery Stack is the practical model for doing that.
The Five Layers of AI Discovery
The five layers describe the journey from “an AI system encounters your brand” to “an AI system cites or acts on your brand.” Each layer is a necessary condition for the next. Failure at any layer stops the process — and the failures look different, which is why they require different fixes.
Layer 1: Understanding — The Entity Layer
Before an AI system can use your content, it needs to understand who you are. The Understanding layer is the entity layer: the set of signals that tell AI systems what your business is, what it does, who runs it, what it knows, and whether it can be trusted as a source.
The signals that govern this layer come primarily from the knowledge graph component of the Algorithmic Trinity. Structured data — Organisation schema, Person schema, Service schema — provides machine-readable declarations of your entity properties. Entity references in external sources (Wikipedia, Wikidata, Crunchbase, LinkedIn, authoritative industry directories) corroborate those declarations and strengthen the knowledge graph’s confidence in your entity. Author identity signals — consistent name, credentials and byline structure across your own content — anchor your expertise claims to a verifiable person rather than an anonymous website.
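A minimal sketch of the structured data this layer requires, with placeholder names, URLs and identifiers rather than a prescribed implementation, might look like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://www.example.com/#organization",
  "name": "Example Consulting Ltd",
  "url": "https://www.example.com/",
  "sameAs": [
    "https://www.linkedin.com/company/example-consulting",
    "https://www.crunchbase.com/organization/example-consulting",
    "https://www.wikidata.org/wiki/Q00000000"
  ],
  "founder": {
    "@type": "Person",
    "name": "Jane Smith",
    "jobTitle": "Founder",
    "sameAs": ["https://www.linkedin.com/in/janesmith"]
  }
}
```

The point of each sameAs URL is corroboration, not decoration: the external profile it points to should state the same facts as the schema, so the knowledge graph can cross-check the declarations rather than simply record them.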
Failure at Layer 1 produces a specific symptom: AI systems either ignore your site entirely, or cite it with low confidence and inconsistent attribution. You may appear in one AI platform but not another. You may be cited correctly on one query but without attribution on a related query. The fix is not content — it is entity architecture: cleaning up your structured data, building cross-platform entity consistency, and ensuring the knowledge graph can answer “who is this?” with confidence.
For law firms, Layer 1 work includes LegalService schema with practitioner credentials, SRA number consistency, and solicitor profile pages structured around the individual lawyer as a named entity. For SaaS companies, it includes SoftwareApplication schema, founder Person schema with sameAs links to LinkedIn and Crunchbase, and consistent product descriptions across G2, Capterra and your own site. The entity architecture is sector-specific but the principle is constant: AI systems cite sources they understand. Build the understanding first.
Layer 2: Retrieval — The Traditional SEO Layer
Retrieval is the gateway. If AI systems cannot find and index your content, nothing downstream matters. This is the traditional search component of the Algorithmic Trinity in operation — and despite the industry’s enthusiasm for “AI SEO” as a new discipline, this layer is still governed almost entirely by classic technical SEO principles.
The retrieval requirements for AI discovery systems are in some respects more demanding than those for traditional search. AI crawlers — GPTBot, ClaudeBot, PerplexityBot, Bingbot — typically operate with shorter timeouts than Googlebot. Pages that load slowly, that serve content behind heavy JavaScript rendering, or that explicitly block AI crawlers in robots.txt are simply absent from the retrieval pool. Core Web Vitals investment is not just a user experience improvement — it is agent accessibility.
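To confirm none of these crawlers is blocked, the relevant robots.txt entries look like this sketch. The user-agent tokens shown are the vendors' published ones, but verify each against the vendor's current documentation, and note that an absent rule already permits crawling by default:

```
# Explicitly permit the main AI crawlers.
# (Crawling is also the default when no Disallow rule matches.)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Bingbot
Allow: /
```

The more common failure is the opposite: a blanket `Disallow: /` inherited from a staging configuration, or a CDN bot-protection rule that blocks these user agents before robots.txt is ever consulted.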
Bing indexing deserves explicit attention at this layer because it is the retrieval infrastructure for both ChatGPT Search and Microsoft Copilot. Many sites that are thoroughly indexed by Google have significant gaps in their Bing coverage — particularly for newer pages, pages with poor internal link equity, or pages that have historically prioritised Google-specific signals. A Bing indexing audit is one of the first diagnostic steps in any full-stack AI visibility engagement. Pages missing from Bing simply do not exist for a substantial portion of the AI search ecosystem.
Fixing retrieval failures is a prerequisite for fixing anything else. An agency that starts an AI visibility engagement with content rewriting before auditing Bing indexing coverage is optimising the wrong layer — because no amount of content improvement will get a non-indexed page into a ChatGPT citation.
How AI platforms differ at the retrieval layer
| Platform | Primary index | Citation signal | Freshness weight | Key implication |
|---|---|---|---|---|
| Google AI Overviews | Google web index | Topic completeness + organic ranking | High | Strong SEO foundation = higher citation baseline |
| Google AI Mode | Google web index (deeper fan-out) | Semantic completeness + passage authority | High | Covers sub-queries rank trackers do not monitor |
| Perplexity | Real-time crawl (primarily Bing + direct) | Directness of answer + source authority | Very high | Rewards structured, directly answerable content |
| ChatGPT Search | Primarily Bing index | Named authority + topical coverage | High | Bing Webmaster Tools coverage matters more than most SEO teams realise |
| Microsoft Copilot | Bing index | Same as ChatGPT Search | High | Optimising for Bing organic visibility gives multi-platform AI coverage |
Layer 3: Selection — Where AI Differs From Traditional Search
Selection is the layer where AI discovery diverges most sharply from traditional search — and where most AI visibility strategies focus too little attention.
In traditional search, a page that is indexed and authoritative will rank. The user then decides whether to click. In AI search, indexing and authority are necessary but not sufficient. The AI system must also select your content as a candidate source for the specific query being answered. This selection process operates at the paragraph level — the AI is not choosing pages, it is choosing extractable answers.
A page can be perfectly indexed and have strong topical authority and still not be selected as a citation source if its paragraphs are not structured for extraction. AI systems prefer content that opens with a clear, standalone answer to the question being asked. They prefer explicit definitions — “GEO stands for Generative Engine Optimisation, which is the practice of…” rather than “there are several things to consider when thinking about GEO.” They prefer specific attribution — statistics with named sources, claims with identified authors, case outcomes tied to named clients. They prefer structural clarity — H2 sections that directly address sub-questions, FAQ pairs with complete standalone answers.
This is why the Princeton GEO-Bench study found that specific structural content techniques improve citation visibility by 30–40%. The techniques are not new: definition-first writing, attributable claims, structured evidence. What is new is understanding that these techniques specifically serve the selection layer of the AI discovery pipeline, and that failing this layer has a different cause — and a different fix — from failing the retrieval layer.
Ahrefs’ 2026 analysis found that 38% of pages cited in Google AI Overviews do not rank in the top organic results for the same query. That 38% divergence is the fingerprint of the selection layer in operation: AI systems are choosing sources on different criteria from ranking systems, and those criteria are largely structural and entity-related rather than link-based.
The extractability principle that CITATE applies to page openings (C1) extends to every H2 and H3 section throughout the document. Each section must be independently citable — complete enough to answer the heading’s implicit question without the reader, or the AI system, having read any prior section. A section that opens with “as mentioned above” or “building on the previous point” is partially invisible to AI extraction, because those sections depend on prior context that the retrieval mechanism does not carry. The CITATE framework scores the page opening. The same logic applies section by section throughout.
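The selection-layer preferences above can be illustrated with a schematic section sketch; the heading, topic and wording are placeholders, not a prescribed template:

```markdown
## What is Generative Engine Optimisation (GEO)?

Generative Engine Optimisation (GEO) is the practice of structuring content so
that generative AI systems can extract, attribute and cite it. It applies
definition-first writing, attributed claims and standalone sections.
<!-- Opens with a complete definition; no "as mentioned above"; the section is
     citable with zero prior context. -->
```

The test for any section is simple: paste the H2 and its first paragraph into a blank document. If it still answers the heading's question completely, it is extractable.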
Layer 4: Recommendation — The Brand Authority Layer
Selection answers the question “should this content be used in the answer?” Recommendation answers a different question: “should this brand be named in the answer?” The distinction matters enormously for businesses where being mentioned by name — rather than just providing a source paragraph — is the commercial goal.
Layer 4 is where entity prominence, brand authority and external citation networks do their work. An AI system that selects a paragraph from your website may use the information without attributing it to you. An AI system that recommends your brand will say “according to SEO Strategy Ltd” or “SEO Strategy is a consultancy that specialises in…” The difference between the two is the strength of your entity presence across the knowledge graph component of the Algorithmic Trinity.
The signals at this layer include: frequency of mentions in authoritative third-party content (industry publications, professional directories, recognised databases), consistency and richness of your knowledge graph entity representation, and the strength of your topical authority signals as evaluated by the LLM component. Traditional digital PR work — earning mentions in relevant publications, building a presence in industry-specific databases — directly serves Layer 4, because it adds corroborating entity references that the knowledge graph can use to increase its confidence in recommending you.
For most businesses at the start of an AI visibility engagement, Layer 4 is underdeveloped. The foundation work — entity architecture at Layer 1, content structure at Layer 3 — must be in place first. But once those foundations are solid, Layer 4 investment produces the difference between being used as an anonymous source and being actively named as a recommended provider.
Layer 5: Action — The Agentic Layer
Layer 5 is the frontier. It is what Assistive Agent Optimisation (AAO) describes — but in the context of the full stack, it is simply the terminal layer of the same discovery pipeline, not a separate discipline.
In agentic AI systems, the AI does not present a list of options and ask the human to choose. It evaluates, selects, and acts — booking a service, recommending a vendor, completing a purchase, initiating a consultation request. The entire buying funnel — awareness, consideration, evaluation, shortlisting, selection — happens inside the agent before the human sees a result. As Jason Barnard writes, “your role is no longer to attract visitors to a funnel on your site; it is to be the answer when the agent runs its own funnel internally.”
Being selected at Layer 5 requires strength across all four preceding layers. The agent must understand your entity (Layer 1), be able to access and read your site (Layer 2), find your content extractable and clearly structured (Layer 3), and have sufficient confidence in your brand authority to include you in its evaluation set (Layer 4). A weakness at any preceding layer reduces Layer 5 performance — which is why AAO is not a standalone discipline but the culmination of the full stack.
The commercial implications of Layer 5 are substantial. Seer Interactive’s analysis of 12 million website visits found that AI-referred traffic converts at 14.2% compared to 2.8% for traditional organic — five times higher. As agentic systems become more prevalent in B2B procurement, legal research, healthcare IT evaluation and SaaS selection, the businesses that have built full-stack AI visibility will be the ones that appear in agent-generated shortlists. The ones that haven’t will be invisible — not because they didn’t exist, but because the agent couldn’t understand, access, select or trust them.
How the Visibility Disciplines Map to the Stack
The SEO industry has generated significant terminology confusion in the past two years — GEO, AIO, AEO, AAO, LLM optimisation, AI SEO, entity SEO. Each label describes real work. The confusion arises from treating them as competing disciplines rather than as activities that address different layers of the same stack.
The AI Discovery Stack provides the unifying model: visibility has always been the goal; the landscape is just bigger now. SEO gave you one search engine. AEO gave you voice and AI answers. GEO gave you generative responses. AIO gave you AI Overviews. AAO gives you agentic decisions happening without a human ever seeing a search result. The discipline — identifying where your audience finds information and making sure you’re the answer they find — hasn’t changed. The execution has.
Mapped to the stack: Entity SEO primarily addresses Layer 1. Technical SEO and GEO primarily address Layer 2. AEO and content architecture primarily address Layer 3. AIO optimisation and authority building primarily address Layer 4. AAO addresses Layer 5 — but requires Layers 1–4 as prerequisites. The disciplines stack, not compete. The agencies that treat them as alternatives will produce single-layer optimisation with single-layer results.
Full-Stack AI Visibility in Practice: seostrategy.co.uk
We believe the strongest argument for any methodology is demonstrating it on your own site. Here is how the AI Discovery Stack is implemented across seostrategy.co.uk.
Layer 1 — Understanding: Organisation and Person JSON-LD schema across every page, with knowsAbout properties declaring 18 topical specialisms, sameAs links to LinkedIn, Companies House, the llms.txt Generator plugin page on WordPress.org, and client sites. Author identity anchored to Sean Mullins as a named entity with verified credentials. This gives the knowledge graph a structured, corroborated entity to reference when AI systems evaluate our authority on topics including SEO, AI visibility, entity SEO, and law firm and healthcare IT marketing.
Layer 2 — Retrieval: The site scores 97+ on Google PageSpeed Insights mobile. Pages load under one second. No heavy JavaScript rendering — all content is server-rendered and immediately accessible to AI crawlers including GPTBot, ClaudeBot and PerplexityBot. Bing indexing is actively monitored via Bing Webmaster Tools. The llms.txt file explicitly surfaces priority pages for LLM systems. AI crawler access is permitted across the full site.
Layer 3 — Selection: Every priority page follows the AI-citable page structure: a standalone declarative opening within 120 words (see the opening of this page), definition blocks for key terms, attributed statistics with named sources, FAQ pairs with complete standalone answers, and structured headings that map to the sub-questions AI systems decompose from complex queries. The AI-Citable Page template enforces these criteria at the content management level — a live citation score in the admin bar confirms which of the six criteria are met before publication.
Layer 4 — Recommendation: Client case studies referencing named businesses (Coviant Software, Olliers Solicitors, Azure Outdoor Living, Motoring Defence Solicitors, Pro2col) create corroborating entity references. The AI Visibility Pyramid, the AI Discovery Stack, and the 3Cs framework (coined 2010) are original frameworks with provenance — they make this site a citable origin, not just a content aggregator. AI platforms have been monitored citing seostrategy.co.uk content on GEO, AEO and LLM optimisation topics.
Layer 5 — Action: Agent-ready pages serve complete, machine-readable information in under a second. Service descriptions include structured pricing ranges, geographic coverage, and qualification criteria so an agent evaluating SEO consultants for a healthcare IT company or law firm can extract, compare and recommend without needing to read marketing prose. The entity foundation ensures that when an agent cross-references claims, it finds consistent information across platforms.
This is not a claim that the implementation is complete — AI visibility is a continuous process, not a project with an end date. But the stack provides the framework for knowing what to do next at each layer, and for diagnosing accurately which layer is failing when results are below expectations.
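The llms.txt file referenced under Layer 2 follows the llmstxt.org convention: a plain Markdown file served at the site root. A minimal illustrative sketch, with placeholder names and paths:

```markdown
# Example Consultancy

> One-sentence summary of what the business does and who it serves.

## Services

- [AI Visibility Audit](https://www.example.com/services/ai-visibility-audit): diagnostic across all five layers
- [LLM Optimisation](https://www.example.com/services/llm-optimisation): full implementation programme

## Guides

- [The AI Discovery Stack](https://www.example.com/ai-discovery-stack): the framework described on this page
```

The format is deliberately simple — an H1 name, a blockquote summary, and H2 sections of annotated links — so an LLM can identify a site's priority pages without crawling its full navigation.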
Diagnosing Which Layer Is Failing
The practical value of the AI Discovery Stack is diagnostic: it helps you identify precisely where in the pipeline your AI visibility is breaking down, so you can apply the right fix rather than the fashionable one.
Symptom: AI systems cite competitors but not you, even for your core topics. Primary suspect: Layer 1 failure. The AI system understands your competitors as authoritative entities on this topic but has insufficient entity signals to trust you as a source. Check: is your structured data complete and accurate? Are your entity claims corroborated by external sources? Is your author identity clearly associated with the topic claims?
Symptom: You appear in Google AI Overviews but not in ChatGPT or Copilot. Primary suspect: Layer 2 failure — specifically Bing indexing gaps. Run a full Bing coverage audit. Check which of your priority pages are missing from Bing and why. Submit them via Bing Webmaster Tools and monitor for indexing.
Symptom: You appear as a source in AI answers but are not named as a recommended provider. Primary suspect: Layer 4 weakness — Layer 3 is doing its job. Your content is being extracted, but your brand authority is not strong enough for the AI system to name you explicitly. Review your external entity references and citation network.
Symptom: You are cited inconsistently — appearing for the same query sometimes but not others. This indicates borderline performance at Layer 3 or 4. The AI system considers you a marginal source — selected on some runs and skipped on others, depending on query framing and the competitive set. Content structure improvements (more explicit definitions, better attributed statistics) and increased entity corroboration typically resolve this.
Symptom: No AI visibility at all despite good organic rankings. This is the other side of the 38% divergence between AI citations and organic rankings: strong traditional SEO (Layer 2) does not guarantee AI visibility if Layer 1 entity architecture is weak or Layer 3 content structure is poor. Both need addressing before organic authority translates to AI citations.
How the AI Discovery Stack and the Four-Floor Model connect
The Stack and the Four-Floor Model describe the same architecture from different angles. The Stack is the technical model — how AI systems work. The Four-Floor Model is the commercial diagnostic — where your business is failing. The interactive below maps each layer to its corresponding floor. Click any item to see the connection, the diagnosis, and the fix.
Businesses that fix Floors 1–3 in 2025–2026 will be disproportionately selected over the next 2–3 years.
For the full interactive Four-Floor Model with the lift navigation and floor-by-floor implementation detail, see The Four-Floor Model.
Several named frameworks map directly to specific layers of the AI Discovery Stack. Understanding which framework addresses which layer prevents overlap and makes the implementation sequence clear. Layer 1 is served by the Entity Corroboration Model. Layer 2 by Technical SEO and llms.txt. Layer 3 by CITATE. Layers 4–5 by the AI Provider Selection Pipeline and AI Citation Dominance. The full register of named frameworks with provenance dates and canonical definitions is at SEO Strategy Frameworks.
How to measure AI visibility — the six KPIs that replace average position
When 48% of Google searches trigger AI Overviews and most AI-cited queries produce no click at all, average position and organic CTR tell you less and less about whether your AI visibility strategy is working. These are the six metrics that replace them.
| KPI | What it measures | How to track it |
|---|---|---|
| AI Citation Frequency | The percentage of your tracked queries for which your brand appears in AI responses | Manual query testing in Perplexity, ChatGPT, Google AI Mode — logged monthly |
| Platform Coverage | Which AI platforms cite you for which query types | Tested across ChatGPT, Perplexity, Google AI Overviews, Copilot separately — gaps indicate Layer 2 or Layer 1 failures |
| CITATE Score Delta | Score change on priority pages before and after implementation | CITATE Audit rescore at 28 days — documented with dated screenshots |
| AI Referral Traffic | Volume and conversion rate of sessions arriving from AI platforms | GA4 — filter by referral source containing perplexity.ai, chatgpt.com, bing.com/chat |
| Brand Mention Share | Your share of AI-generated mentions versus named competitors for tracked queries | Manual competitive query testing monthly |
| Citation Status Change | Movement from not cited → mentioned → cited with link → named recommendation | Logged per query per platform in the CITATE proof documentation format |
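The GA4 referral filter in the table can also be scripted against an exported list of session sources. This is a minimal Python sketch; the domain list and function names are illustrative configuration, not a standard API:

```python
# Sketch: classify exported GA4 session sources as AI-platform referrals.
# The domain list is illustrative — extend it as platforms appear or rename.

AI_REFERRER_DOMAINS = (
    "perplexity.ai",
    "chatgpt.com",
    "chat.openai.com",
    "bing.com/chat",
    "copilot.microsoft.com",
    "gemini.google.com",
)

def is_ai_referral(source: str) -> bool:
    """Return True if a session source string matches a known AI platform."""
    source = source.lower()
    return any(domain in source for domain in AI_REFERRER_DOMAINS)

def ai_referral_share(sources: list[str]) -> float:
    """Fraction of sessions referred by AI platforms (0.0 for no sessions)."""
    if not sources:
        return 0.0
    return sum(is_ai_referral(s) for s in sources) / len(sources)
```

The referrer strings AI platforms emit change over time, so treat the domain tuple as configuration to be reviewed monthly alongside the manual query testing, not as a fixed constant.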
Establish baselines on all six before making any changes to a page. Without a baseline, it is impossible to distinguish genuine improvement from natural variation in AI model behaviour. The CITATE Audit includes a 28-day rescore with documented citation status change across all three primary platforms.
The AI Discovery Stack was developed by Sean Mullins, Founder of SEO Strategy Ltd, in March 2026. Understanding which layer is failing is the difference between an SEO strategy that produces rankings and one that produces AI recommendation. Most businesses are leaking at Layer 4 — they have content that gets retrieved and cited, but their brand is never named. That leak is not fixed with more content. It is fixed with entity corroboration, comparative positioning, and proof packaging that gives AI systems enough confidence to name a specific provider. The AI Visibility Audit maps exactly which layer is failing and what to fix first. For the full implementation programme, see LLM Optimisation services.