Complete Guide

From Rankings to Recommendations: The Three-Floor Model for AI Visibility

AI-mediated discovery is already changing how businesses get found. The question is no longer just where you rank — it is whether AI systems can find you, extract from you, and name you as a recommended provider. The three-floor model explains why those are three separate problems requiring three separate solutions.

23 min read · 4,684 words · Updated Apr 2026

The shift from rankings to recommendations is not a future scenario. It is the present operating condition of search. When a procurement manager asks Perplexity to identify the best-fit managed file transfer solution for their healthcare IT environment, when a marketing director asks ChatGPT Search which SEO agencies specialise in SaaS, when a law firm client asks Copilot to find a criminal defence solicitor in Manchester — an AI system generates a response that either names your business or does not. There is no position three. There is no page two. There is cited or invisible, and the infrastructure that determines which outcome you get is not your ranking. It is your floors.

62% of Google AI Overview citations now come from pages that do not rank in the organic top 10 for the same query; seven months earlier, 76% of citations still overlapped with the organic top 10. The assumption that strong rankings guarantee AI citation is demonstrably broken. Citation and ranking are different outcomes determined by different signals. Source: Ahrefs analysis of 863,000 keywords, 2026.
14.2% vs 2.8% conversion rate for traffic arriving via AI citation compared to standard organic search — a five-fold difference across twelve million visits. The gap exists because a visitor arriving via AI citation has already received a recommendation. They are not browsing. They are following through on a trusted source. Source: Seer Interactive analysis of 12 million website visits, 2025.
82% of all AI citations come from earned media — independent editorial, not brand-owned content — across over one million AI response links from ChatGPT, Claude, Gemini and Perplexity analysed between July and December 2025. Floor 3 is where AI recommendation is actually determined. Source: Muck Rack Generative Pulse, December 2025, 1M+ links.
30–40% improvement in AI citation rates from structured content interventions consistent with CITATE criteria — the largest effect size measured in the GEO-Bench evaluation of generative engine optimisation techniques across 10,000 AI-generated responses. Source: Princeton University, Georgia Tech & IIT Delhi, GEO-Bench, 2024.

This guide maps the structural shift in how businesses get found — from competing for ranked positions to competing for AI citation and named recommendation. If you want to know which floor is your current constraint, the AI Visibility Audit diagnoses it directly.

Start Here: Which Floor Are You Failing?

Before the model, the diagnostic. Three questions, in order. Each takes under two minutes.

Floor 1 — Can AI find you? Search site:yourdomain.com in Bing. If fewer than 70% of your commercial pages appear, ChatGPT Search and Microsoft Copilot cannot retrieve you — both platforms index from Bing. Check your robots.txt for PerplexityBot, OAI-SearchBot, and Google-Extended. A legacy catch-all disallow rule may have been blocking AI crawlers for months without any visible consequence in Google Analytics.
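That robots.txt check is worth scripting so it can be re-run after every release. A minimal sketch using Python's standard library; the user-agent list mirrors the crawlers named above plus a few common companions, and bot names do change, so treat it as a starting point rather than a definitive registry:

```python
from urllib.robotparser import RobotFileParser

# Crawlers named above plus common companions. Bot names change over time,
# so review this list against each platform's current documentation.
AI_CRAWLERS = [
    "PerplexityBot",    # Perplexity
    "OAI-SearchBot",    # ChatGPT Search
    "GPTBot",           # OpenAI crawler
    "Google-Extended",  # Google AI opt-out token
    "ClaudeBot",        # Anthropic
    "Bingbot",          # Bing index feeds ChatGPT Search and Copilot
]

def check_ai_access(domain: str, sample_path: str = "/") -> dict:
    """Return {bot_name: allowed?} for a sample URL, based on the live robots.txt."""
    parser = RobotFileParser()
    parser.set_url(f"https://{domain}/robots.txt")
    parser.read()  # note: a missing robots.txt is treated as allow-all
    url = f"https://{domain}{sample_path}"
    return {bot: parser.can_fetch(bot, url) for bot in AI_CRAWLERS}

if __name__ == "__main__":
    for bot, allowed in check_ai_access("example.com", "/services/").items():
        print(f"{bot:18} {'allowed' if allowed else 'BLOCKED'}")
```

A legacy catch-all disallow rule shows up immediately here as every crawler reporting BLOCKED, which is exactly the silent failure described above.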

Floor 2 — Can AI cite you? Ask Perplexity a question your content should answer — something specific to your sector and expertise. Does your site appear in the citations? If competitors appear but you do not, and your Bing indexation is clean, the problem is content extractability: your pages are being retrieved but not selected. The content exists in the retrieval pool but is not structured for citation.

Floor 3 — Will AI name you? Ask ChatGPT “who are the best [your service] providers in [your market]?” Are you named? If your content is cited on Floor 2 but you are absent from Floor 3 shortlists, the gap is trust infrastructure: AI systems use your content as an anonymous source but do not have sufficient independent corroboration to name you as a recommended provider with confidence.

The floor where the answer first becomes “no” is where to invest. Everything above it is commercially inert until it is resolved.
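The Floor 2 check above is easy to run once by hand and tedious to run weekly across twenty queries. A rough sketch of how it could be automated against Perplexity's API follows; the endpoint, the "sonar" model name, and the top-level citations field are assumptions based on Perplexity's published chat-completions format, so verify all three against current documentation before relying on the output.

```python
import os
import requests

# Assumed endpoint and response shape; confirm against Perplexity's current API docs.
PPLX_ENDPOINT = "https://api.perplexity.ai/chat/completions"

def cited_domains(question: str, api_key: str) -> list[str]:
    """Ask Perplexity a question and return the domains it cites in its answer."""
    resp = requests.post(
        PPLX_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "sonar",  # model name is an assumption; substitute the current one
            "messages": [{"role": "user", "content": question}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    urls = resp.json().get("citations", [])  # assumed: a list of cited URLs
    return [url.split("/")[2] for url in urls if "://" in url]

if __name__ == "__main__":
    key = os.environ["PPLX_API_KEY"]
    question = "Which managed file transfer solutions are best suited to NHS trusts?"
    domains = cited_domains(question, key)
    print("Cited domains:", domains)
    print("Cited" if "yourdomain.com" in domains else "Not cited for this query.")
```

Run the same script against a fixed set of buyer-intent questions each week and the Floor 2 trend line writes itself.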

Get a formal diagnosis — AI Visibility Audit →

Why This Is Bigger Than an Algorithm Update

Every major shift in search has been described as fundamental and then resolved into an incremental change. Panda, Penguin, Hummingbird, BERT — each shifted the dial on how Google evaluated existing signals. The signals themselves — links, content, authority — remained the same. The mechanism for discovery did not change.

What is happening now is different in kind. Google CEO Sundar Pichai has described Search evolving into an “agent manager” — a system that coordinates long-running, multi-step tasks rather than returning links — and named 2027 as the inflection point, with $175–185 billion being spent in 2026 to build that infrastructure. Shane Legg, co-founder of Google DeepMind and the researcher credited with coining the term AGI, gives a 50% probability of minimal AGI by 2028. These are not fringe forecasts. They come from the people building the systems that businesses depend on for discovery.

The practical commercial implication does not require buying either timeline in full. It requires only acknowledging what is already observable: search systems are becoming answer engines, research agents, and recommendation layers. "Agentic AI" is searched 135,000 times per month. "WebMCP" grew 6,400% in three months. "LLM optimisation" is up 240% year-on-year. The vocabulary is forming now because the behaviour is changing now.

The clearest single illustration of what the agent-manager model looks like in practice arrived in April 2026. Google rolled out agentic restaurant booking in Search globally — including the UK — allowing users to describe their requirements and have AI agents scan multiple booking platforms simultaneously to find and reserve in real time. Google’s Search Product Lead Rose Yao described it as task completion, not search: no app-switching, no hassle, just the outcome. That is not a preview of what search is becoming. It is deployed infrastructure, live now, on one of the most commercially saturated local query types that exists. The restaurants that benefit are the ones whose availability data can be queried by an agent in real time. The ones that cannot are not ranked lower — they are absent from the answer entirely.

For businesses, this changes the central question. Not "where do we rank?" but "can AI systems find us, extract from us, and trust us enough to name us?" Those are three separate questions answered at three separate floors. Most businesses have never asked the last two.

The Three-Floor Model

The three-floor model is a diagnostic framework, not a checklist. Each floor is a dependency for the one above it. A business that fails Floor 1 cannot benefit from Floor 2 work. A business that fails Floor 2 cannot benefit from Floor 3 investment. The sequence is fixed.

The AI Recommendation Stack: the three floors, plus a future fourth layer
AI does not rank businesses. It selects them.

Floor 1 — Entity Foundation & Discovery (start here). NAP consistency · Bing indexability · Wikidata · llms.txt · technical SEO. AI systems can find and correctly identify your entity before any recommendation is possible; nothing above works without this. Fail here and you are invisible to AI systems.

Floor 2 — Content Extractability. Structured data · schema markup · machine-readable answers · AI-citable format. AI retrieval systems can parse, extract, and quote your content accurately. Fail here and you are retrieved but not cited.

Floor 3 — Trust & Selection. AI recommendation eligibility · CITATE · entity corroboration · citation. AI systems have enough trust to name and recommend you — not just retrieve you. Fail here and you are cited but not recommended.

Floor 4 — Agentic Execution (future layer, 2026–2027). MCP · WebMCP · callable tools · governance layer. Until this layer exists, you cannot be actioned by agents.

Where most businesses are: the majority of businesses that come through an AI visibility audit are failing at Floor 2 or Floor 3 — not because they lack content, but because their content is not structured for selection and their trust signals are not independently corroborated. Floor 1 failures are common and invisible. Very few businesses are genuinely ready for Floor 4.

Floor 1 — Retrieval Eligibility

The access layer. Before any AI platform can evaluate whether your content is worth citing, it must be able to retrieve it. Google AI Overviews retrieves from Google’s index. ChatGPT Search and Microsoft Copilot retrieve from Bing. Perplexity uses its own crawler — PerplexityBot — which must be explicitly permitted. Claude uses ClaudeBot. Each has separate access requirements that most technical SEO audits have never checked.

The most common Floor 1 failure is invisible: a technical audit from 18 months ago added a NOARCHIVE directive to commercial pages to solve a different problem. Result: completely invisible to Microsoft Copilot — the highest-intent B2B discovery channel in the AI landscape, embedded across 400 million Microsoft 365 seats — regardless of content quality or Bing indexation. One directive, months of invisible damage.
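Checking whether that directive is present on a given page takes minutes. A minimal sketch, assuming the third-party requests library, and checking only the two places the directive normally lives, the X-Robots-Tag response header and the robots meta tag:

```python
import re
import requests  # third-party: pip install requests

def robots_directives(url: str) -> set[str]:
    """Collect robots directives from the X-Robots-Tag header and robots meta tags."""
    resp = requests.get(url, timeout=30)
    directives = set()

    # Header form, e.g. "X-Robots-Tag: noarchive, noindex"
    header = resp.headers.get("X-Robots-Tag", "")
    directives.update(d.strip().lower() for d in header.split(",") if d.strip())

    # Meta tag form, e.g. <meta name="robots" content="noarchive">
    # Crude pattern: assumes the name attribute appears before content.
    for match in re.finditer(
        r'<meta[^>]+name=["\'](?:robots|bingbot)["\'][^>]+content=["\']([^"\']+)["\']',
        resp.text,
        flags=re.IGNORECASE,
    ):
        directives.update(d.strip().lower() for d in match.group(1).split(","))

    return directives

if __name__ == "__main__":
    for page in ["https://example.com/", "https://example.com/services/"]:
        found = robots_directives(page)
        flag = "REVIEW" if {"noarchive", "nocache", "noindex"} & found else "ok"
        print(f"{flag:7} {page} {sorted(found)}")
```

Anything flagged REVIEW on a commercial page is worth tracing back to whoever added it and why.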

Floor 1 is mostly audit and fix work. It does not require new content, new technology, or significant ongoing investment. A Floor 1 audit takes days. Remediation takes weeks. For most businesses, it is the highest-leverage, lowest-cost work available right now — because failures here make everything above them irrelevant.

Floor 2 — Content Extractability

The extraction layer. AI systems do not read pages the way humans do. They retrieve at paragraph and section level, extracting fragments out of sequence from pages they have already decided to consult. Content written for human readers reading linearly — context-dependent paragraphs, unnamed entities, qualitative claims without numbers — is retrieved but not cited. The content exists in the retrieval pool but is structurally opaque to AI extraction.

The CITATE framework defines the six structural criteria that determine whether content crosses the extraction threshold: a standalone opening answer (C1), explicit definitions (C2), statistics with full context (C3), named sources inline (C4), a named entity (C5), and an attributable claim (C6). Research from Princeton, Georgia Tech and IIT Delhi found structured content interventions consistent with these criteria improved AI citation rates by 30–40% in controlled testing — the largest effect size measured in the GEO-Bench evaluation.
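Those criteria can be roughly spot-checked in bulk before committing to an editorial pass. The sketch below is emphatically not the CITATE scoring used in production; it is a set of crude regex heuristics that approximate what each criterion looks for, useful only for flagging which pages deserve a human review first:

```python
import re

def rough_citate_flags(page_text: str, entity_name: str) -> dict[str, bool]:
    """Crude heuristic proxies for the six criteria. Flags for human review, not a score."""
    first_para = page_text.strip().split("\n\n")[0]
    return {
        "C1 standalone opening answer": len(first_para.split()) >= 40
            and not re.match(r"\s*(this|it|they|these)\b", first_para, re.I),
        "C2 explicit definition": bool(
            re.search(r"\b(is|are|means|refers to)\b", first_para, re.I)),
        "C3 statistic with context": bool(
            re.search(r"\d+(\.\d+)?%.*\b(20\d{2}|study|analysis|survey)\b", page_text, re.I)),
        "C4 named source inline": bool(
            re.search(r"\baccording to\b|\banalysis of\b|\bresearch from\b", page_text, re.I)),
        "C5 named entity in body": entity_name.lower() in page_text.lower(),
        "C6 attributable claim": bool(
            re.search(rf"{re.escape(entity_name)}[^.]*\b(found|recommends|advises|reports)\b",
                      page_text, re.I)),
    }

if __name__ == "__main__":
    sample = open("page.txt").read()  # plain-text export of a commercial page
    for criterion, passed in rough_citate_flags(sample, "Sean Mullins").items():
        print(f"{'PASS' if passed else 'flag':4}  {criterion}")
```

A page that fails most of these heuristics almost certainly needs the editorial work described next; a page that passes them all still needs a human eye before it is declared citable.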

The fix is editorial, not technical. Rewriting opening paragraphs, adding inline definitions, adding named-source statistics, naming the author in body text. Per page, this is hours. Across a site, it is a structured programme that compounds into traditional SEO performance simultaneously — the same structural clarity that makes content citable by AI also makes it more likely to earn featured snippets, rank for long-tail queries, and satisfy user intent.

Floor 3 — Recommendation Eligibility

The trust layer. This is where AI recommendation is actually determined — and it is entirely outside your own content. Muck Rack’s Generative Pulse analysis of over one million AI response links from ChatGPT, Claude, Gemini and Perplexity found that 82% of all AI citations come from earned media. In consumer electronics, AI cites third-party authoritative sources 92.1% of the time, according to a University of Toronto analysis of 13 industries in September 2025. The brand’s own content accounts for less than 8%.

AI systems do not select businesses based on what those businesses say about themselves. They select based on what independent sources say: editorial coverage in publications you did not write, review platform profiles with genuine volume and recency, structured entity databases, named professional credentials. The selection mechanism is corroboration, not assertion. A business that has only ever declared its own expertise is invisible at Floor 3 regardless of how good its website is.

Floor 3 is measured in months, not days. For a business starting from a weak Floor 3 position, six to twelve months of sustained effort is the realistic timeline before meaningful AI recommendation visibility emerges. The businesses that start now will be in a structurally stronger position in twelve months than those waiting for the shift to feel more urgent. For sector-specific Floor 3 strategy: law firms, SaaS and enterprise software, local business.

What This Means by Sector

Sector | Most urgent floor | The specific risk | The first action
Law firms | Floor 3 | AI names competitors with Chambers/Legal 500 recognition while citing your content anonymously | Claim legal directory profiles; add LegalService schema with practitioner credentials and SRA numbers
SaaS / enterprise software | Floor 3 | Procurement agents query G2 and Clutch directly (GPT-5.4 uses site: operators targeting both); absence from these platforms is an AI visibility failure | Complete G2 and Clutch profiles; build ROI calculators and comparison pages with named, attributable data
B2B professional services | Floor 2 | Content ranks but never appears in AI answers because sections are context-dependent and entity-anonymous | Apply CITATE C3/C4 to ten priority pages; ensure the named entity (C5) appears in body text, not just the byline
Local business / SME | Floor 1 | AI crawlers blocked by legacy robots.txt; Bing absent; GBP unverified — invisible to AI before content is ever evaluated | Check robots.txt for AI crawlers; verify Bing indexation; complete and verify Google Business Profile
Enterprise | Floor 3 | Wikidata absent; entity corroboration insufficient for AI systems to name the business with confidence on high-stakes queries | Build a Wikidata entity; run a systematic editorial coverage programme targeting the 20 publications AI actually cites in your category

The Measurement Problem Nobody Has Solved — Until Now

The most-liked post in the SEO industry right now asks a pointed question: “Describe what a GEO or AI SEO specialist does that does not include building links, optimising content, or technical SEO.” 241 comments. No satisfactory answer.

The question is the right one. The answer it did not receive: the distinction between SEO and AI visibility work is not in a list of different tactics — many of the inputs overlap. The distinction is in the outcome being measured and the criteria that determine that outcome. SEO optimises for rankings and traffic. AI visibility work optimises for retrieval, extraction, attribution, and named recommendation. These are different outcomes determined by different signals, and the signals can be scored.

The CITATE framework — six criteria across three layers (Structure C1–C2, Evidence C3–C4, Identity C5–C6) — is the scored, reproducible, auditable standard for Floor 2 extractability. The framework is in production across 30+ pages on this site, enforced through a custom WordPress template that scores each page 1–6 in the admin bar, and is filed as UK Trademark UK00004359244. It answers the question nobody in that thread answered: here is what AI visibility work does that is distinct from SEO, here is how you score it, and here is what a page looks like before and after.

Priority Actions by Floor

Action | Floor | Timeline | Why it matters
Check robots.txt for PerplexityBot, OAI-SearchBot, Google-Extended, ClaudeBot, Google-Agent | Floor 1 | This week | Unblocking PerplexityBot produces citation activity within days of the next crawl. Google-Agent is already visible in server logs — blocking it removes agentic evaluation entirely.
Run site:yourdomain.com on Bing; submit sitemap to Bing Webmaster Tools | Floor 1 | This week | Bing powers ChatGPT Search and Microsoft Copilot. Pages absent from Bing are categorically ineligible for citation on both platforms regardless of content quality.
Add llms.txt to site root | Floor 1 | One week | Explicit AI access guidance at inference time. Not universally adopted yet — early adoption still confers a structural advantage over sites leaving it to crawl heuristics.
Run a CITATE C1–C6 audit on ten priority commercial pages | Floor 2 | Two weeks | In a typical audit, 60–70% of H2 sections fail C3 (statistic with full context). Fixing C3 is the highest-impact single editorial change available. GEO-Bench: 41% citation rate improvement in controlled testing.
Add Person schema with a named entity to every service or product page (schema sketch after this table) | Floor 2 | Two weeks | Person schema drives 2.3× higher citation rates (Onely, 2026). Named entities are the prerequisite for recommendation, not just citation. Without a named entity, AI can extract content but cannot recommend a provider.
Verify Google Business Profile; audit NAP consistency across your top ten directory mentions | Floor 3 | Two weeks | Inconsistent entity signals reduce AI confidence in entity identity. Floor 3 requires consistent, independently corroborated signals — not variety.
Create a Wikidata entity (if eligible) | Floor 3 | One month | Wikidata feeds Google's Knowledge Graph and is referenced by multiple AI systems as a trust anchor. Absence does not prevent citation — presence accelerates entity recognition across all AI platforms simultaneously.
Complete sector-specific authority platform profiles (G2/Clutch for SaaS; legal directories for law firms; trade body listings for professional services) | Floor 3 | One to three months | ChatGPT Search uses site: operators targeting G2 and Clutch specifically. These are no longer just review sites; they are entity corroboration infrastructure that AI systems actively query when evaluating vendor recommendations.
Build an original data asset (calculator, benchmark, case study with named outcomes) | Floor 3 | One to three months | Original data earns citations that no competitor can replicate. The MDS drink driving calculator ranks for competitive terms and drives qualified leads because it provides verifiable, citable outputs that no self-promotional content can match.
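The Person schema action flagged in the table above is the one row that is pure markup. A minimal JSON-LD sketch follows, generated from a Python dict so the property names stay visible; every value is a placeholder to replace with your own details, and the property set is a starting point rather than an exhaustive profile:

```python
import json

# Placeholder values throughout -- substitute the real practitioner and organisation details.
person_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Principal Consultant",
    "worksFor": {
        "@type": "Organization",
        "name": "Example Consulting Ltd",
        "url": "https://www.example.com",
    },
    "url": "https://www.example.com/about/jane-example/",
    "sameAs": [
        # Entity corroboration signals: profiles that independently confirm the person.
        "https://www.linkedin.com/in/jane-example/",
        "https://www.wikidata.org/wiki/Q00000000",
    ],
    "knowsAbout": ["Managed file transfer", "Healthcare IT procurement"],
}

# Emit the <script> block to paste into the page template.
print('<script type="application/ld+json">')
print(json.dumps(person_schema, indent=2))
print("</script>")
```

The same pattern extends to LegalService or Organization markup for the other sector rows; the point is that the named entity in the schema matches the named entity in the body text.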

The AI Visibility Audit maps each of these floors against your specific site, identifies which floor is the primary constraint, and produces a sequenced action plan. It is the starting point for every AI visibility engagement.

Start with the AI Visibility Audit →

Key Definitions

Rankings to recommendations shift
The structural change in digital discovery in which the commercially significant outcome of a search interaction moves from a ranked position in a link list to a named citation or recommendation in an AI-generated answer. First described in this framing by Sean Mullins, SEO Strategy Ltd, 2026.
Recommendation eligibility
The state a business reaches when AI systems have sufficient retrieval access, content extractability, and third-party trust signals to name the business as a recommended provider in response to a buyer-intent query — as distinct from merely using the business's content as an anonymous source.
The three-floor model
A diagnostic framework for AI visibility that maps the three sequential dependency layers determining whether a business is found, extracted from, and recommended by AI systems: Floor 1 (retrieval eligibility — can AI find you?), Floor 2 (content extractability — can AI cite you?), Floor 3 (recommendation eligibility — will AI name you?). Each floor is a prerequisite for the one above it. Developed by Sean Mullins, SEO Strategy Ltd, 2026.

Frequently Asked Questions

What is the difference between SEO rankings and AI recommendation?

SEO rankings determine where your pages appear in a list of links when someone searches Google or Bing. AI recommendation is the outcome when an AI system — ChatGPT, Perplexity, Google AI Overviews, or Microsoft Copilot — names your business as a suggested provider in response to a buyer-intent query. Rankings and recommendation are determined by different signals. A 2026 Ahrefs analysis of 863,000 keywords found that 62% of AI Overview citations now come from pages that do not rank in the organic top 10 — meaning the traditional assumption that strong rankings guarantee AI citation is demonstrably broken.

Why does Floor 1 failure matter if my Google rankings are strong?

Google rankings and AI citation eligibility are separate systems with separate infrastructure requirements. ChatGPT Search and Microsoft Copilot retrieve from the Bing index — a site absent from Bing is invisible to both platforms regardless of its Google position. Perplexity uses its own crawler, PerplexityBot, which must be explicitly permitted in robots.txt. A single NOARCHIVE directive can eliminate all Microsoft Copilot visibility. These failures produce no signal in Google Analytics or Google Search Console — they are invisible until you check the specific platforms where they are occurring.

What is the CITATE framework and how does it relate to Floor 2?

CITATE is the content citation standard developed by Sean Mullins at SEO Strategy Ltd (UK Trademark UK00004359244, March 2026). It defines six criteria that determine whether a page is extractable, evidenced, and attributable enough for AI systems to cite with confidence: C1 (standalone opening answer), C2 (explicit definition), C3 (statistic with full context), C4 (named source inline), C5 (named entity in body text), C6 (attributable claim). A page that reaches 6/6 CITATE has passed Floor 2. Whether it gets named as a recommended provider depends on Floor 3. CITATE is necessary but not sufficient for recommendation — it is the prerequisite.

How long does Floor 3 take to build?

For a business starting from a weak Floor 3 position — limited third-party editorial coverage, incomplete review platform profiles, no Wikidata entity — six to twelve months of sustained effort is the realistic timeline before meaningful AI recommendation visibility emerges. This is not because the work is slow. It is because Floor 3 signals — earned editorial coverage, review platform authority, entity corroboration across structured databases — accumulate over time and cannot be manufactured overnight. The businesses that start now will have compounding advantages twelve months from now that latecomers cannot buy.

Is this relevant for businesses that primarily use Copilot rather than ChatGPT or Perplexity?

Especially relevant. Microsoft Copilot is embedded across every Windows device, Edge browser, and Microsoft 365 application — meaning it is the mandated AI tool for procurement teams, finance departments, and operations managers in enterprise environments. It retrieves from Bing and uses sequential grounding, making Bing indexation a prerequisite for any Copilot citation. LinkedIn is a direct entity signal because Microsoft owns it — ensuring your LinkedIn company page and principal consultant profiles are complete and consistent with your website schema directly affects Copilot citation confidence. For the full Copilot-specific strategy, see the Copilot SEO guide.

How do I know which floor is my current constraint?

Three searches that take under ten minutes total. First: run site:yourdomain.com on Bing — if fewer than 70% of your commercial pages appear, Floor 1 is the constraint. Second: ask Perplexity a specific question your content should answer and check whether your site appears in citations — if it does not, and Bing indexation is clean, Floor 2 is the constraint. Third: ask ChatGPT "who are the best [your service] providers in [your market]?" and note whether you are named — if competitors are named but you are not despite appearing in Perplexity citations, Floor 3 is the constraint. The AI Visibility Audit maps this formally with platform-specific testing across ChatGPT, Perplexity, Google AI Overviews and Copilot.

Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.

Ready to improve your search visibility?

Book a free 30-minute consultation and let's discuss your SEO strategy.

Get in Touch