Most businesses asking about MCP are not ready for it to matter. Not because they are behind — because they are working on the wrong layer of the system entirely.
Here is the thing that changes how you think about all of it: AI does not rank businesses. It selects them. There is no position three. There is no page two. An AI system evaluating providers in your sector either names your business in its response or it does not. The mechanism determining which outcome you get is not your MCP implementation. It is whether the AI system has enough to go on to select you with confidence — and that is determined by floors, not by technology.
According to Seer Interactive’s analysis of twelve million visits in 2025, traffic arriving via AI citation converts at 14.2% compared to 2.8% for standard organic search. That gap exists because a visitor arriving via AI citation has already received a recommendation before they land on your page. They are not browsing. They are following through. The businesses capturing that traffic are not the ones who implemented MCP first. They are the ones whose floors are solid.
First — SEO is not being replaced. It’s the building.
This is the question underneath every question about AI agents, MCP and WebMCP: should I abandon SEO and focus on the AI thing?
No. The framing is wrong.
SEO is not a channel being disrupted by AI recommendation — it is the foundation and the first two floors of the building that AI recommendation requires. The technical SEO work you have been doing — crawlability, indexation, site speed, schema markup, structured content — is Floor 1 and Floor 2. The entity work — Wikidata, NAP consistency, Google Business Profile, digital PR — is Floor 1. The content strategy work — topical authority, structured answers, named authorship — is Floor 2 into Floor 3.
You are not starting over. You are looking at the building you have already been constructing and understanding which floor you are currently on. That changes what you do next. It does not change the fact that the building is worth having — or that the investment you have already made is the right one.
What MCP actually is — and what WebMCP is not
Model Context Protocol (MCP) is a standard developed by Anthropic that defines how AI systems connect to external tools, data sources and services. Think of it as the USB-C of AI integrations — instead of every AI system needing a custom connection to every external tool, MCP provides a shared protocol so any compatible AI can connect to any compatible tool.
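Under the hood, MCP messages are JSON-RPC 2.0. A sketch of the first exchange an agent has with a compatible server, asking what tools it offers (the check_availability tool and its schema are illustrative placeholders, not from any real server):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

A server responds with a machine-readable tool catalogue the agent can then call:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "check_availability",
        "description": "Return open consultation slots for a given date",
        "inputSchema": {
          "type": "object",
          "properties": {"date": {"type": "string"}},
          "required": ["date"]
        }
      }
    ]
  }
}
```

The shared shape is the point: any agent that speaks the protocol can discover and invoke any server's tools without a bespoke integration.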
Server-side MCP requires dedicated backend infrastructure. Implementation costs typically run from £5,000 to £30,000 or more depending on scope. The businesses implementing it right now are predominantly e-commerce companies, SaaS platforms and enterprises with operational data — inventory, CRM, calendar, pricing — that AI agents need in real time to take meaningful actions on a user’s behalf.
WebMCP is the browser-native variant. Where server-side MCP exposes data via a backend server, WebMCP enables AI agents operating in a browser context to interact with page-level tools without a custom server build. Chrome 146 carries it as a feature flag. It is in active specification development. The barrier to experimenting is lower than server-side MCP — but both are Floor 4 technologies. And Floor 4 is commercially inert until Floors 1, 2 and 3 are solid.
The building your business needs to occupy first
The four-floor model is a dependency chain, not a menu. You cannot skip floors. An AI agent that cannot find your entity cannot extract your content. An AI system that cannot extract your content cannot build the trust required to recommend you. An AI agent that has not selected you has no reason to execute actions on your behalf at Floor 4.
This is not a framework for optimists. It is a diagnostic. And the diagnostic for most businesses is that they are not ready for the conversation they are having.
Floor by floor — where businesses fail, and what it takes to fix it
Floor 1 — Entity Foundation & Discovery
Effort: days to a few weeks. Mostly audit and remediation work.
The most common Floor 1 failures are invisible from inside a traditional SEO workflow. Wikidata entry absent or incomplete. NAP data inconsistent across directories. Bing has not indexed key pages. llms.txt file missing.
Bing matters more than most Google-focused practitioners realise. ChatGPT Search and Microsoft Copilot use the Bing index for retrieval. A page not indexed by Bing is invisible to two of the largest AI-powered search surfaces in market — regardless of where it sits in Google. That is not a future risk. It is happening now.
The good news: Floor 1 is mostly audit and fix work. It does not require new content, new technology or significant ongoing investment. A Floor 1 audit typically takes days. The remediation — claiming profiles, correcting inconsistencies, submitting to Bing, setting up llms.txt — is weeks of implementation, not months. For most businesses, this is the highest-leverage, lowest-cost work available right now. See the AI Visibility Action Plan for the diagnostic sequence.
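On the llms.txt point: it is a plain markdown file served at your site root that gives AI systems a curated map of your most important pages. A minimal sketch, assuming the proposed llmstxt.org format (the business name, URLs and descriptions are placeholders):

```text
# Example Consulting Ltd

> Independent consultancy providing strategy and implementation
> services to mid-market businesses in the UK.

## Services

- [AI Strategy](https://example.com/services/ai-strategy): scoping and roadmaps
- [SEO Audits](https://example.com/services/seo-audit): technical and entity audits

## About

- [Our team](https://example.com/about): named consultants and credentials
```

The format is deliberately simple: an H1 title, a blockquote summary, then sections of annotated links, so an AI system can read it without crawling the whole site.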
Floor 2 — Content Extractability
Effort: weeks to a month per priority page cluster. Structural content changes.
If your content cannot be extracted cleanly, it cannot be cited. If it cannot be cited, it cannot be recommended. That is the entire Floor 2 argument.
Content written for human readers is often not structured for AI extraction. An opening paragraph that builds to its point fails Floor 2. A statistic without a named source in the same sentence fails Floor 2. A page written in collective voice without naming the person or organisation responsible fails Floor 2. The CITATE framework defines the six structural criteria that take a page from retrieved to citable. Research from Princeton, Georgia Tech and IIT Delhi found that structured content interventions consistent with these criteria improve AI citation rates by 30–40%.
The investment here is editorial, not technological. Rewriting opening paragraphs, adding inline definitions, adding named-source statistics, naming the author in the body text. Per page, this is hours of work. Across a site, it is a structured programme of content improvement — and it compounds into traditional SEO performance simultaneously.
Floor 3 — Trust & Selection
Effort: months, ongoing. Cannot be rushed or manufactured.
This is where AI selection is actually determined — and it is entirely outside your own content.
AI systems do not select businesses based on what those businesses say about themselves. They select based on what independent sources say: editorial coverage in publications you did not write, review platform profiles with genuine volume and recency, structured entity databases, named professional credentials. The selection mechanism is corroboration, not assertion. A business that has only ever declared its own expertise is invisible at Floor 3 regardless of how good its website is.
Two independent studies now quantify the scale of Floor 3's dominance in the citation data. In September 2025, researchers at the University of Toronto analysed 13 industries and found AI systems citing third-party authoritative sources 92.1% of the time in consumer electronics and 81.9% in automotive. Muck Rack's Generative Pulse team, analysing over one million AI response links between July and December 2025, found that 82% of all AI citations come from earned media. The practical implication: the entire discipline of on-page content optimisation, including Floor 2, is competing for the remaining fraction. Floor 2 is the prerequisite. Floor 3 is where selection is actually determined.
The 14.2% vs 2.8% conversion gap lives here. The businesses capturing AI-cited traffic at five times the conversion rate of organic traffic have Floor 3 authority that took time to build. For a business starting from a weak Floor 3 position, six to twelve months of sustained effort is the realistic timeline before meaningful AI recommendation visibility emerges. The businesses that start now will be in a structurally stronger position in twelve months than those waiting for the MCP conversation to feel more urgent. For sector-specific guidance, see Law Firm SEO and Software & SaaS SEO.
Floor 4 — Agentic Execution
Planning horizon: 2026–2027. No significant investment required yet.
Floor 4 is where MCP and WebMCP live. An AI agent operating here has already worked through Floors 1, 2 and 3 — it has found your entity, extracted and understood your content, and selected you as a credible provider. Now it needs to act: book a consultation, retrieve a document, submit a form, query your availability.
The honest timeline: WebMCP is in active specification today. Broader browser vendor adoption is a 2026 horizon. Mainstream business deployment — where a meaningful proportion of commercial buyers are reaching businesses through agent-mediated interaction rather than direct search — is realistically 2026 to 2027. The businesses implementing server-side MCP right now are early movers with specific operational data use cases. For most businesses, the right Floor 4 posture today is informed preparation, not active implementation. The Agentic SEO guide and the WebMCP guide cover what preparation looks like in practice.
One immediate check: make sure Google-Agent is not blocked in your robots.txt. Google-Agent is the user agent Google’s AI systems use when browsing on a user’s behalf — a catch-all disallow rule will block the agentic evaluation layer from reaching your pages entirely.
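A sketch of what that looks like in practice. Under the robots.txt standard (RFC 9309), crawlers obey the group with the most specific matching user-agent line, so an explicit Google-Agent group takes precedence over a catch-all (the disallow rule below is illustrative):

```text
# Explicit group for Google's agentic user agent.
# This overrides the catch-all group below, because crawlers
# follow the most specific matching User-agent group.
User-agent: Google-Agent
Allow: /

# Illustrative catch-all for unrecognised agents.
User-agent: *
Disallow: /
```

Without the explicit group, Google-Agent falls into the catch-all and is blocked sitewide.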
The timeline is also accelerating. Google’s TurboQuant announcement in March 2026 — a vector indexing technique that reduces database index build time to virtually zero — has been identified by analysts including Marie Haynes as a likely factor in the March 2026 core update. If correct, Google can now run semantic matching across hundreds of results rather than the top 20–30 it could previously afford to process. Traditional ranking signals become less decisive at that scale. Entity recognition, semantic authority, and the kind of machine-evaluable structure that CITATE and Floor 3 build — these become more important, not less. The original TurboQuant research paper was published in April 2025, giving Google nearly a full year to integrate it before the March 2026 core update. The businesses building their floors now are building into a system that is becoming more selective, not less.
What this means for your SEO investment
The businesses asking whether to invest in MCP are often asking the wrong question because they are framing AI as a separate channel requiring a separate budget. It is not.
AI recommendation is the upper layer of the same visibility infrastructure that SEO has always been building. The ROI case is straightforward: traditional SEO investment produces traffic at known conversion rates. AI citation produces less traffic but at significantly higher conversion rates — the 14.2% versus 2.8% figure is not a marginal improvement, it is a structural difference driven by the nature of the recommendation itself. AI-driven discovery is growing as a share of total search behaviour (search data confirms it: “what is agentic ai” draws 33,100 monthly searches, up 174% year on year, and “ai strategy consultant” draws 1,000 per month), and as it grows, the businesses with solid floors will disproportionately capture the higher-converting channel.
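To make the structural difference concrete, here is a minimal arithmetic sketch using the two conversion rates above. The traffic volumes are hypothetical, chosen only to illustrate the shape of the trade-off:

```python
def conversions(visits: int, rate: float) -> float:
    """Expected conversions for a given visit volume and conversion rate."""
    return visits * rate

# Hypothetical volumes: AI-cited traffic is a fraction of organic,
# but converts at the rates cited in the article.
organic_conversions = conversions(10_000, 0.028)   # roughly 280
ai_cited_conversions = conversions(2_000, 0.142)   # roughly 284
```

At one fifth of the traffic, the AI-cited channel produces slightly more conversions than the organic channel in this sketch. That is what a structural difference, rather than a marginal one, looks like.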
You do not need a separate AI strategy budget if your SEO and AI investment is correctly directed. You need Floor 1 remediation (low cost, one-time), Floor 2 content improvements (moderate, ongoing), Floor 3 authority building (sustained, long-term) and Floor 4 awareness without premature investment. That is a reframing of existing work, not an addition to it. The way we work is built around exactly this sequence.
What to do this week
One — check your Bing indexation. Search site:yourdomain.com in Bing. If fewer than 70% of your key commercial pages appear, you have a Floor 1 problem costing you ChatGPT and Copilot visibility today.
Two — check your robots.txt for Google-Agent. If you have a catch-all disallow rule for unrecognised user agents, add an explicit group for it: a User-agent: Google-Agent line followed by Allow: /. This takes two minutes. Getting it wrong means the agentic evaluation layer cannot reach your pages regardless of everything else you do.
Three — audit your three most important commercial pages against CITATE. Does the opening paragraph stand alone as a complete answer without surrounding context? Is every technical term explicitly defined? Does a named-source statistic appear inline? If not, Floor 2 is failing on your most important pages. The AI Citation Checklist walks through the audit.
Four — ask the AI systems where you stand. Ask ChatGPT, Perplexity and Google AI Overviews to recommend providers in your sector. Note exactly who appears. That is your Floor 3 benchmark. The gap between where you appear and where your competitors appear is the trust signal deficit you need to close. The AI Visibility Audit maps this formally.
Five — check your Wikidata entry. Search wikidata.org for your business name. If there is no entry, or the entry is incomplete, this is a Floor 1 fix that directly affects how AI systems identify and corroborate your entity — and it costs nothing to address. See the Wikidata SEO guide for the process.
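The Wikidata check in step five can also be scripted. A minimal sketch using Wikidata's public wbsearchentities API (the helper names are ours; an empty result list means no entity was found for the name):

```python
import json
import urllib.parse
import urllib.request

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def build_search_url(name: str, language: str = "en") -> str:
    """Build a wbsearchentities query URL for a business name."""
    params = {
        "action": "wbsearchentities",
        "search": name,
        "language": language,
        "format": "json",
    }
    return WIKIDATA_API + "?" + urllib.parse.urlencode(params)

def find_entity_ids(name: str) -> list[str]:
    """Return matching Wikidata entity IDs (e.g. 'Q42'), or [] if none."""
    with urllib.request.urlopen(build_search_url(name)) as resp:
        data = json.load(resp)
    return [hit["id"] for hit in data.get("search", [])]
```

An empty list from find_entity_ids is the Floor 1 gap: AI systems have no structured entity to corroborate your business against.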
The lift is coming. Is your building ready?
Floor 4 is being built. The lift is coming. But the lift only stops at buildings that are ready for it — and readiness is determined by floors, not by technology adoption.
AI does not rank businesses. It selects them. The selection criteria are already in place, already being applied, and already producing the 14.2% conversion differential for the businesses that meet them. The work available right now — getting your entity found, your content extractable, your trust signals built — is the same work that makes your existing SEO investment compound rather than depreciate.
Start with the floors. The lift will take care of itself.