This guide maps the structural shift in how businesses get found — from competing for ranked positions to competing for AI citation and named recommendation. If you want to know which floor is your current constraint, the AI Visibility Audit diagnoses it directly.
Start Here: Which Floor Are You Failing?
Before the model, the diagnostic. Three questions, in order. Each takes under two minutes.
Floor 1 — Can AI find you? Search site:yourdomain.com in Bing. If fewer than 70% of your commercial pages appear, ChatGPT Search and Microsoft Copilot cannot retrieve you — both platforms index from Bing. Check your robots.txt for PerplexityBot, OAI-SearchBot, and Google-Extended. A legacy catch-all disallow rule may have been blocking AI crawlers for months without any visible consequence in Google Analytics.
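The robots.txt part of this check can be scripted. The sketch below uses Python's standard-library `urllib.robotparser` against a hypothetical legacy robots.txt with exactly the catch-all disallow pattern described above — substitute your own file's contents (or fetch it from your domain) to run the same check for real:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical legacy robots.txt: a catch-all disallow with a
# Googlebot carve-out — the pattern that silently blocks AI crawlers
# without any visible effect in Google Analytics.
ROBOTS_TXT = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Allow: /
"""

AI_CRAWLERS = ["PerplexityBot", "OAI-SearchBot", "Google-Extended", "ClaudeBot"]

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for bot in AI_CRAWLERS:
    status = "allowed" if rp.can_fetch(bot, "/services/") else "BLOCKED"
    print(f"{bot}: {status}")
# With the file above, every AI crawler falls through to the
# catch-all rule and is blocked, while Googlebot remains allowed.
```

Because none of the AI user agents has its own rule block, each inherits the `User-agent: *` disallow — which is precisely why the damage is invisible in Google-centric analytics.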
Floor 2 — Can AI cite you? Ask Perplexity a question your content should answer — something specific to your sector and expertise. Does your site appear in the citations? If competitors appear but you do not, and your Bing indexation is clean, the problem is content extractability: your pages are being retrieved but not selected. The content exists in the retrieval pool but is not structured for citation.
Floor 3 — Will AI name you? Ask ChatGPT “who are the best [your service] providers in [your market]?” Are you named? If your content is cited on Floor 2 but you are absent from Floor 3 shortlists, the gap is trust infrastructure: AI systems use your content as an anonymous source but do not have sufficient independent corroboration to name you as a recommended provider with confidence.
The floor where the answer first becomes “no” is where to invest. Everything above it is commercially inert until it is resolved.
Get a formal diagnosis — AI Visibility Audit →
Why This Is Bigger Than an Algorithm Update
Every major shift in search has been described as fundamental and then resolved into an incremental change. Panda, Penguin, Hummingbird, BERT — each shifted the dial on how Google evaluated existing signals. The signals themselves — links, content, authority — remained the same. The mechanism for discovery did not change.
What is happening now is different in kind. Google CEO Sundar Pichai has described Search evolving into an “agent manager” — a system that coordinates long-running, multi-step tasks rather than returning links — and named 2027 as the inflection point, with $175–185 billion being spent in 2026 to build that infrastructure. Shane Legg, co-founder of Google DeepMind and the researcher credited with coining the term AGI, gives a 50% probability of minimal AGI by 2028. These are not fringe forecasts. They come from the people building the systems that businesses depend on for discovery.
The practical commercial implication does not require buying either timeline in full. It requires only acknowledging what is already observable: search systems are becoming answer engines, research agents, and recommendation layers. The term “agentic AI” is searched 135,000 times per month; search interest in WebMCP grew 6,400% in three months; queries for LLM optimisation are up 240% year-on-year. The vocabulary is forming now because the behaviour is changing now.
The clearest single illustration of what the agent-manager model looks like in practice arrived in April 2026. Google rolled out agentic restaurant booking in Search globally — including the UK — allowing users to describe their requirements and have AI agents scan multiple booking platforms simultaneously to find and reserve in real time. Google’s Search Product Lead Rose Yao described it as task completion, not search: no app-switching, no hassle, just the outcome. That is not a preview of what search is becoming. It is deployed infrastructure, live now, on one of the most commercially saturated local query types that exists. The restaurants that benefit are the ones whose availability data can be queried by an agent in real time. The ones that cannot are not ranked lower — they are absent from the answer entirely.
For businesses, this changes the central question. Not “where do we rank?” but “can AI systems find us, extract from us, trust us, and name us?” Those are four separate questions answered at four separate floors. Most businesses have never asked the last three.
The Three-Floor Model
The three-floor model is a diagnostic framework, not a checklist. Each floor is a dependency for the one above it. A business that fails Floor 1 cannot benefit from Floor 2 work. A business that fails Floor 2 cannot benefit from Floor 3 investment. The sequence is fixed.
Floor 1 — Retrieval Eligibility
The access layer. Before any AI platform can evaluate whether your content is worth citing, it must be able to retrieve it. Google AI Overviews retrieves from Google’s index. ChatGPT Search and Microsoft Copilot retrieve from Bing. Perplexity uses its own crawler — PerplexityBot — which must be explicitly permitted. Claude uses ClaudeBot. Each has separate access requirements that most technical SEO audits have never checked.
The most common Floor 1 failure is invisible: a technical audit from 18 months ago added a NOARCHIVE directive to commercial pages to solve a different problem. The result: those pages are completely invisible to Microsoft Copilot — the highest-intent B2B discovery channel in the AI landscape, embedded across 400 million Microsoft 365 seats — regardless of content quality or Bing indexation. One directive, months of invisible damage.
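Auditing for this failure mode is mechanical. The illustrative sketch below scans page HTML for `<meta name="robots">` directives using a regex — a deliberately minimal approach for a one-off check; a production audit should use a proper HTML parser and also inspect the `X-Robots-Tag` HTTP header, which can carry the same directive:

```python
import re

def robots_meta_directives(html: str) -> set[str]:
    """Collect directives from <meta name="robots"> tags.

    Illustrative regex parser only: it assumes the name attribute
    precedes content, and ignores X-Robots-Tag response headers.
    """
    pattern = re.compile(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        re.IGNORECASE,
    )
    directives: set[str] = set()
    for content in pattern.findall(html):
        directives.update(d.strip().lower() for d in content.split(","))
    return directives

# Hypothetical commercial page head carrying the legacy directive
page_head = '<head><meta name="robots" content="index, follow, noarchive"></head>'

if "noarchive" in robots_meta_directives(page_head):
    print("NOARCHIVE present — flag this page for Floor 1 remediation")
```

Run across a sitemap's worth of URLs, this surfaces every page carrying the directive in minutes.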
Floor 1 is mostly audit and fix work. It does not require new content, new technology, or significant ongoing investment. A Floor 1 audit takes days. Remediation takes weeks. For most businesses, it is the highest-leverage, lowest-cost work available right now — because failures here make everything above them irrelevant.
Floor 2 — Content Extractability
The extraction layer. AI systems do not read pages the way humans do. They retrieve at paragraph and section level, extracting fragments out of sequence from pages they have already decided to consult. Content written for human readers reading linearly — context-dependent paragraphs, unnamed entities, qualitative claims without numbers — is retrieved but not cited. The content exists in the retrieval pool but is structurally opaque to AI extraction.
The CITATE framework defines the six structural criteria that determine whether content crosses the extraction threshold: a standalone opening answer (C1), explicit definitions (C2), statistics with full context (C3), named sources inline (C4), a named entity (C5), and an attributable claim (C6). Research from Princeton, Georgia Tech and IIT Delhi found structured content interventions consistent with these criteria improved AI citation rates by 30–40% in controlled testing — the largest effect size measured in the GEO-Bench evaluation.
The fix is editorial, not technical. Rewriting opening paragraphs, adding inline definitions, adding named-source statistics, naming the author in body text. Per page, this is hours. Across a site, it is a structured programme that compounds into traditional SEO performance simultaneously — the same structural clarity that makes content citable by AI also makes it more likely to earn featured snippets, rank for long-tail queries, and satisfy user intent.
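Parts of a CITATE audit can be pre-screened automatically. The sketch below is not the production CITATE scorer described later in this guide — it is a rough heuristic pre-check, using regexes I have chosen for illustration, that flags sections with obvious C3 (statistic with context) and C5 (named entity in body text) gaps so an editor knows where to look first:

```python
import re

def citate_precheck(section_text: str, entity_name: str) -> dict:
    """Rough pre-check for two CITATE criteria.

    These heuristics are illustrative, not the official scorer:
    they only flag obvious gaps worth a manual editorial review.
    """
    # C3: a number with a unit, near source/date language
    has_stat = bool(re.search(
        r"\d[\d,.]*\s*(%|percent|billion|million|x)", section_text))
    has_source = bool(re.search(
        r"\b(according to|research from|found that|survey|study|20\d\d)\b",
        section_text, re.IGNORECASE))
    # C5: the named entity appears in the body text itself, not just a byline
    has_entity = entity_name.lower() in section_text.lower()
    return {"C3_statistic": has_stat and has_source, "C5_entity": has_entity}

section = ("According to a 2025 Muck Rack analysis, 82% of AI citations "
           "come from earned media, Acme Consulting's review found.")
print(citate_precheck(section, "Acme Consulting"))
# → {'C3_statistic': True, 'C5_entity': True}
```

A section like “We are widely regarded as experts” fails both checks — qualitative, unsourced, and entity-anonymous — which is exactly the retrieved-but-not-cited profile described above.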
Floor 3 — Recommendation Eligibility
The trust layer. This is where AI recommendation is actually determined — and it is entirely outside your own content. Muck Rack’s Generative Pulse analysis of over one million AI response links from ChatGPT, Claude, Gemini and Perplexity found that 82% of all AI citations come from earned media. In consumer electronics, AI cites third-party authoritative sources 92.1% of the time, according to a University of Toronto analysis of 13 industries in September 2025. The brand’s own content accounts for less than 8%.
AI systems do not select businesses based on what those businesses say about themselves. They select based on what independent sources say: editorial coverage in publications you did not write, review platform profiles with genuine volume and recency, structured entity databases, named professional credentials. The selection mechanism is corroboration, not assertion. A business that has only ever declared its own expertise is invisible at Floor 3 regardless of how good its website is.
Floor 3 is measured in months, not days. For a business starting from a weak Floor 3 position, six to twelve months of sustained effort is the realistic timeline before meaningful AI recommendation visibility emerges. The businesses that start now will be in a structurally stronger position in twelve months than those waiting for the shift to feel more urgent. For sector-specific Floor 3 strategy: law firms, SaaS and enterprise software, local business.
What This Means by Sector
| Sector | Most urgent floor | The specific risk | The first action |
|---|---|---|---|
| Law firms | Floor 3 | AI names competitors with Chambers/Legal 500 recognition while citing your content anonymously | Claim legal directory profiles; add LegalService schema with practitioner credentials and SRA numbers |
| SaaS / enterprise software | Floor 3 | Procurement agents query G2 and Clutch directly (GPT-5.4 uses site: operators targeting both); absence from these platforms is an AI visibility failure | Complete G2 and Clutch profiles; build ROI calculators and comparison pages with named, attributable data |
| B2B professional services | Floor 2 | Content ranks but never appears in AI answers because sections are context-dependent and entity-anonymous | Apply CITATE C3/C4 to ten priority pages; ensure named entity (C5) appears in body text not just byline |
| Local business / SME | Floor 1 | AI crawlers blocked by legacy robots.txt; Bing absent; GBP unverified — invisible to AI before content is ever evaluated | Check robots.txt for AI crawlers; verify Bing indexation; complete and verify Google Business Profile |
| Enterprise | Floor 3 | Wikidata absent; entity corroboration insufficient for AI systems to name the business with confidence on high-stakes queries | Build Wikidata entity; systematic editorial coverage programme targeting the 20 publications AI actually cites in your category |
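The schema actions in the table reduce to emitting JSON-LD in the page head. The sketch below builds a minimal `LegalService` block with a practitioner credential, matching the law-firm row above. Every name, URL, and the SRA number are placeholders — swap in verified details before publishing; `LegalService`, `Person`, and `identifier` with a `PropertyValue` are standard schema.org vocabulary:

```python
import json

# Illustrative LegalService JSON-LD with a named practitioner.
# All values below are placeholders, not real entities.
schema = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Example Solicitors LLP",
    "url": "https://www.example-solicitors.co.uk",
    "employee": [{
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Partner, Commercial Litigation",
        "identifier": {
            "@type": "PropertyValue",
            "propertyID": "SRA number",
            "value": "000000",
        },
    }],
}

# Embed the output in a <script type="application/ld+json"> tag
print(json.dumps(schema, indent=2))
```

The same pattern — top-level organisation type plus a named `Person` with a verifiable credential — carries across sectors; only the `@type` and the credential scheme change.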
The Measurement Problem Nobody Has Solved — Until Now
The most-liked post in the SEO industry right now asks a pointed question: “Describe what a GEO or AI SEO specialist does that does not include building links, optimising content, or technical SEO.” 241 comments. No satisfactory answer.
The question is the right one. The answer it did not receive: the distinction between SEO and AI visibility work is not in a list of different tactics — many of the inputs overlap. The distinction is in the outcome being measured and the criteria that determine that outcome. SEO optimises for rankings and traffic. AI visibility work optimises for retrieval, extraction, attribution, and named recommendation. These are different outcomes determined by different signals, and the signals can be scored.
The CITATE framework — six criteria across three layers (Structure C1–C2, Evidence C3–C4, Identity C5–C6) — is the scored, reproducible, auditable standard for Floor 2 extractability. It is in production across 30+ pages on this site, enforced through a custom WordPress template that scores each page 1–6 in the admin bar and is filed as UK Trademark UK00004359244. It answers the question nobody in that thread answered: here is what AI visibility work does that is distinct from SEO, here is how you score it, and here is what a page looks like before and after.
Priority Actions by Floor
| Action | Floor | Timeline | Why it matters |
|---|---|---|---|
| Check robots.txt for PerplexityBot, OAI-SearchBot, Google-Extended, ClaudeBot, Google-Agent | Floor 1 | This week | Unblocking PerplexityBot produces citation activity within days of next crawl. Google-Agent is already visible in server logs — blocking it removes agentic evaluation entirely. |
| Run site:yourdomain.com on Bing; submit sitemap to Bing Webmaster Tools | Floor 1 | This week | Bing powers ChatGPT Search and Microsoft Copilot. Pages absent from Bing are categorically ineligible for citation on both platforms regardless of content quality. |
| Add llms.txt to site root | Floor 1 | One week | Explicit AI access guidance at inference time. Not universally adopted yet — early adoption still confers a structural advantage over sites leaving it to crawl heuristics. |
| Run CITATE C1–C6 audit on ten priority commercial pages | Floor 2 | Two weeks | In a typical audit, 60–70% of H2 sections fail C3 (statistic with full context). Fixing C3 is the highest-impact single editorial change available. GEO-Bench: 41% citation rate improvement in controlled testing. |
| Add Person schema with named entity to every service or product page | Floor 2 | Two weeks | Person schema drives 2.3× higher citation rates — Onely, 2026. Named entities are the prerequisite for recommendation, not just citation. Without a named entity, AI can extract content but cannot recommend a provider. |
| Verify Google Business Profile; audit NAP consistency across top ten directory mentions | Floor 3 | Two weeks | Inconsistent entity signals reduce AI confidence in entity identity. Floor 3 requires consistent, independently corroborated signals — not variety. |
| Create Wikidata entity (if eligible) | Floor 3 | One month | Wikidata feeds Google’s Knowledge Graph and is referenced by multiple AI systems as a trust anchor. Absence does not prevent citation — presence accelerates entity recognition across all AI platforms simultaneously. |
| Complete sector-specific authority platform profiles (G2/Clutch for SaaS; legal directories for law firms; trade body listings for professional services) | Floor 3 | One to three months | ChatGPT Search uses site: operators targeting G2 and Clutch specifically. These are no longer review sites. They are entity corroboration infrastructure that AI systems actively query when evaluating vendor recommendations. |
| Build an original data asset (calculator, benchmark, case study with named outcomes) | Floor 3 | One to three months | Original data earns citations that no competitor can replicate. The MDS drink driving calculator ranks for competitive terms and drives qualified leads because it provides verifiable, citable outputs that no self-promotional content can match. |
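For the llms.txt action above, a minimal file follows the emerging llms.txt convention: an H1 with the site name, a blockquote summary, then sections of annotated links to the pages AI systems should prioritise. The site name and URLs below are placeholders:

```text
# Example Consulting

> UK consultancy specialising in AI visibility audits and
> CITATE-based content programmes for professional services firms.

## Key pages

- [AI Visibility Audit](https://www.example.com/ai-visibility-audit):
  diagnoses which floor — retrieval, extraction, or trust — is the
  current constraint for a given site
- [CITATE framework](https://www.example.com/citate): the six-criteria
  content extractability standard
```

The file lives at the site root (`/llms.txt`), alongside robots.txt, and is plain markdown — no build step required.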
The AI Visibility Audit maps each of these floors against your specific site, identifies which floor is the primary constraint, and produces a sequenced action plan. It is the starting point for every AI visibility engagement.