I have been saying the same thing for over a decade: what’s the point of a pretty website if no one can find it?
For most of that decade, “finding it” meant Google. Someone typed a query, ten blue links appeared, they clicked. Your job was to be one of the ten, and ideally to be near the top. That model is not dead, but the landscape has grown significantly, and the challenge for every marketer and business owner right now is that the platforms are fragmenting. You have more bases to cover than ever.
Here is the part that most of the “agentic SEO” industry is not explaining clearly: the buyers haven’t gone anywhere. They’re still out there, with the same problems, the same budgets, the same need for a supplier they can trust. They’ve just moved into AI systems — and they’re asking much more complex questions than they ever put into a search box.
What actually happens when a buyer uses an AI agent today
Picture this. The marketing director of a mid-sized law firm needs to find an SEO consultant with specialist legal experience. In 2020 she would have Googled “law firm SEO consultant UK” and spent an afternoon clicking through websites. In 2026 she opens Copilot or ChatGPT and types: “Find me an SEO consultant who specialises in law firms, has demonstrable results with criminal defence or family law practices, and is transparent about pricing. UK-based or working remotely with UK clients.”
That is not a search query. It is a brief. And the AI does not show her ten links — it does the research for her.
Within seconds, the agent has searched across multiple sources, visited a shortlist of websites, read the service pages, extracted the case studies, cross-referenced what each consultant claims with what independent platforms say about them, checked whether the pricing is published or hidden behind a “contact us” form, and built a comparison. It then presents her with a recommendation — typically one name, or a shortlist of three at most.
She never visited your website. She never saw your homepage. The agent did the entire evaluation on her behalf — and either your business made the list or it didn’t.
This is what “agentic SEO” actually means at the commercial end. Not AI software automating your keyword research. Not a tool that generates meta descriptions at scale. The thing that matters is whether AI agents, acting as autonomous buyers on behalf of real humans, put your business on the shortlist.
Why the advice you’re reading is missing the point
BrightEdge published research showing that ChatGPT’s crawler activity doubled in a single month in early 2026. The research is real and the number is significant. Their strategic recommendation is to make your site fast, make it accessible to AI crawlers, make sure GPTBot is not blocked in your robots.txt.
That advice is correct. It is also the equivalent of telling a restaurant owner to make sure the front door is unlocked. Necessary, obviously, but it says nothing about the food, the service, the reputation, or whether anyone actually wants to eat there.
Getting GPTBot onto your site solves the access problem. It does not solve the trust problem. And it is the trust problem that determines whether the AI agent recommends you.
Think about how you personally decide whether to trust a business online. You do not go to their website and read their “About Us” page and take it at face value — because you know they wrote it about themselves, and of course they’re going to say they’re excellent. You check TripAdvisor. You look at Google reviews. You ask someone you know if they’ve used them. You look for the independent, third-party signal that confirms what the business is claiming about itself.
This is editorial versus advertorial. You trust the newspaper review more than the full-page advertisement because you know the advertisement was paid for. The review was not. That distinction — whether the source has a reason to be biased — is baked into how humans evaluate credibility. And it turns out it is also, increasingly, baked into how AI systems evaluate credibility.
AI systems are specifically designed not to simply take a business’s own website at face value for commercial recommendation queries. Research from AirOps confirms that AI tools actively downweight branded domains when generating vendor recommendations — they give less weight to what you say about yourself, and more weight to what independent sources say about you. The system has, in effect, learned the same instinct humans have always had: trust editorial, not advertorial.
The trust infrastructure that determines whether you’re recommended
I have been saying “strong brands rank and dominate” for over a decade. That principle has not changed — it has just become more important and more measurable. A strong brand is one that independent sources confirm is what it claims to be. In the world of AI-mediated discovery, that confirmation has a name: entity corroboration.
Here is the architecture as it now stands. Think of it like a building. You need solid foundations before you can build upwards — and no amount of decorating the upper floors will compensate for a missing foundation.
Foundation — Can the AI find and identify you? This is where your Wikidata entry, your Google Business Profile, your consistent NAP (Name, Address, Phone) across every platform, and your schema markup do their work. If the AI cannot reliably identify your business as a specific, verifiable entity — not just a website — you are invisible before you even start. I have seen well-established businesses with ten years of trading history and strong Google rankings fail this test because their information is inconsistent across platforms. The AI cannot reconcile “SEO Strategy Ltd” on their website, “SEO Strategy” on their LinkedIn, and “Sean Mullins SEO” on their Google Business Profile. It treats them as possibly different entities and discounts confidence accordingly.
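To make that concrete, here is a minimal sketch, in Python for illustration, of the kind of Organization schema markup that ties those scattered profiles back to one verifiable entity. Every URL, phone number and identifier below is a hypothetical placeholder, not a prescription:

```python
import json

# A minimal sketch of Organization JSON-LD declaring one canonical entity.
# Every URL, phone number and identifier here is a hypothetical placeholder.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "SEO Strategy Ltd",          # one canonical name, used everywhere
    "url": "https://www.example.co.uk/",
    "telephone": "+44 23 0000 0000",     # must match every directory listing
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Southampton",
        "addressCountry": "GB",
    },
    # sameAs explicitly ties the scattered profiles back to a single entity
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.wikidata.org/wiki/Q00000000",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag,
# rendered server-side so a crawler on a time budget actually sees it.
print(json.dumps(organization, indent=2))
```

The sameAs array is doing the heavy lifting: it is an explicit, machine-readable statement that the LinkedIn page, the Wikidata entry and the website are the same business, so the AI does not have to guess.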
First floor — Can the AI retrieve your content? This is the layer that BrightEdge’s crawlability advice addresses, and it genuinely matters. AI crawlers — GPTBot for ChatGPT, ClaudeBot for Claude, PerplexityBot for Perplexity — operate on tight time budgets. If your page takes four seconds to load, the agent has already moved on. If your structured data is injected by JavaScript rather than rendered server-side, the agent may not see it at all. If you have blocked AI crawlers in your robots.txt, you are invisible by design. These are fixable technical problems and they should be fixed. But fixing them just gets you onto the site visit list — it doesn’t get you onto the recommendation list.
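If you want to check that first-floor access yourself, here is a minimal Python sketch using only the standard library. It tests whether the main AI crawlers are blocked by robots.txt and how long the homepage takes to respond; the site URL is a hypothetical placeholder, and no published crawler time budget is implied:

```python
import time
import urllib.request
import urllib.robotparser

SITE = "https://www.example.co.uk"   # hypothetical site to audit
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

# 1. Are the AI crawlers allowed in by robots.txt?
rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()
for bot in AI_CRAWLERS:
    verdict = "allowed" if rp.can_fetch(bot, f"{SITE}/") else "BLOCKED"
    print(f"{bot}: {verdict}")

# 2. Does the homepage respond fast enough for an agent on a time budget?
start = time.monotonic()
with urllib.request.urlopen(f"{SITE}/", timeout=10) as response:
    response.read()
elapsed = time.monotonic() - start
print(f"Homepage fetched in {elapsed:.2f}s")
```

Ten lines of standard library will not replace a proper technical audit, but it will catch the two failures that matter most at this layer: a blanket robots.txt block and a response time an agent will not wait for.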
Second floor — Is your content extractable? There is a significant difference between content that reads well to humans and content that AI systems can extract and cite cleanly. The agent visiting your website is not reading it the way a human scrolls through a page. It is parsing it systematically, looking for claims that are specific, attributable, and verifiable. “We deliver results” does nothing for an AI system. “We took a criminal defence firm from zero first-page presence to seven position-one rankings for high-value queries, including #3 for ‘drink driving solicitors’ at 720 monthly searches” — that is extractable, specific, and cross-referenceable with publicly visible data.
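To illustrate the distinction, here is a deliberately naive Python sketch, a toy heuristic rather than anything a real AI system does, that counts the checkable specifics in a claim:

```python
import re

def extractable_specifics(claim: str) -> int:
    """Count the checkable elements in a claim. A toy heuristic only:
    real AI extraction is far more sophisticated, but the contrast
    shows why vague copy gives an agent nothing to work with."""
    numbers = re.findall(r"\d[\d,.]*%?", claim)   # rankings, volumes, percentages
    positions = re.findall(r"[#£$]\d", claim)     # positions and prices
    return len(numbers) + len(positions)

vague = "We deliver results for our clients."
specific = ("We took a criminal defence firm from zero first-page presence "
            "to seven position-one rankings, including #3 for 'drink driving "
            "solicitors' at 720 monthly searches.")

print(extractable_specifics(vague))     # 0: nothing an agent can verify
print(extractable_specifics(specific))  # 3: each one cross-referenceable
```

The point is not the scoring, which is crude by design. The point is that the second claim contains facts a machine can isolate, quote and check against public data, and the first contains none.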
Third floor — Does independent evidence confirm what you claim? This is the floor most businesses are missing. A Clutch profile with verified client reviews. A Wikidata entry with multiple corroborating properties. Editorial mentions in industry publications. LinkedIn articles that other people have engaged with and shared. Case studies that name real clients with specific, measurable outcomes. These signals tell the AI that what your website says about you is not self-promotional — it is independently confirmed.
I use the analogy of a court case. Your website is your own testimony. Compelling, hopefully, but the jury knows you’re defending yourself. The independent signals — the third-party reviews, the editorial mentions, the knowledge graph entries — are the witnesses. And as any good barrister knows, the witnesses matter more than the testimony.
Roof — Can the AI agent actually act with you? This is the emerging frontier. As AI agents move from recommending businesses to transacting with them — booking consultations, comparing quotes, submitting enquiries on behalf of users — the businesses that make their services machine-readable will have an advantage. Published pricing (not “contact us for a quote”), clear service descriptions with scope and deliverables, fast-loading contact pathways, and emerging infrastructure like llms.txt all contribute to this. Our own AI assistant is a live demonstration of this principle — a client or prospect can ask it questions about our services and get structured, accurate answers without having to navigate the site. That is an agent-ready interface built now, before it becomes table stakes.
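For illustration: under the llmstxt.org proposal, llms.txt is just a plain markdown file served at /llms.txt, with a name, a blockquote summary, and sections of annotated links. A minimal sketch for a consultancy might look like this (the URLs and descriptions are hypothetical):

```markdown
# SEO Strategy Ltd

> Southampton-based SEO consultancy helping B2B, legal and SaaS
> businesses build durable visibility in Google and in AI systems.

## Services

- [Agentic SEO audit](https://www.example.co.uk/services/audit): scope, deliverables and published pricing
- [Entity corroboration](https://www.example.co.uk/services/entity): Wikidata, schema and independent review signals

## Case studies

- [Criminal defence firm](https://www.example.co.uk/case-studies/defence): zero to seven position-one rankings
```

It is a curated map for agents rather than for humans: the links point to the pages where the scope, pricing and evidence actually live.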
The proof is twenty years of watching this pattern repeat
I built my first website in 2005. I have watched this pattern play out across every major shift in search since then, and the fundamentals have never changed.
One of our clients — a dog walker in Portsmouth — has ranked number one for “dog walker Portsmouth” for seventeen years. Seventeen years. Not because they have an enormous marketing budget. Not because they have a sophisticated technical SEO strategy. Because Google has spent seventeen years watching real people search for that service, find that website, use that business, and come back satisfied. That consistency of genuine usefulness builds a trust signal that no technical trick can replicate. AI systems are now building the same kind of trust signal, not from click behaviour but from entity verification, independent corroboration, and consistent expertise signals.
The advice I have given for over a decade — build a strong brand, be consistent, build genuine authority in your sector — is more relevant now than it has ever been. The mechanism has changed. The principle has not.
What has changed is the urgency. Because the AI agents evaluating your business for that marketing director’s brief are not going to wait twelve months for your brand to develop. They are running that evaluation right now, with the signals that currently exist. If those signals say “ENTITY_SUPPLIED_ONLY” — meaning the only source confirming that your business is what it claims to be is the business itself — the AI will use your content as a source but it will not put you on the shortlist. The businesses already on the shortlist are there because they built the independent evidence stack before the query volume arrived. That is the same early-mover advantage that has always defined who wins in search — it has just moved upstream into the trust layer.
What this means practically, right now
Before you commission a single piece of new content, do this diagnostic. Open ChatGPT, Copilot, and Perplexity. Type your business category followed by “consultant” or “agency” and your location. Read the response. Is your business named? How is it described? What is named instead of you?
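If you would rather run that check on a schedule than by hand, here is a minimal sketch using the OpenAI Python client. It assumes an OPENAI_API_KEY is configured, and one caveat applies: API responses are not identical to the consumer ChatGPT product, which layers its own search tooling on top, so treat the output as directional rather than definitive:

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# The brief-style query a real buyer might delegate to an agent.
query = ("Recommend an SEO consultant who specialises in law firms, "
         "is transparent about pricing, and works with UK clients.")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute whatever model is current
    messages=[{"role": "user", "content": query}],
)
answer = response.choices[0].message.content
print(answer)

# The diagnostic questions: are you named, and who is named instead?
print("Named:", "SEO Strategy" in answer)
```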
If you are absent, the question is not “how do I produce more content?” The question is which layer of the building is failing. I have written a diagnostic guide that maps the five failure modes and their symptoms — because the right fix for an entity recognition failure at the foundation is completely different from the right fix for a content extraction failure two floors up, and spending money on the wrong fix is worse than spending nothing.
The most common failure I see in 2026 is businesses with genuinely strong content and real expertise — businesses that should be on every AI recommendation shortlist — who are invisible because they have no Clutch profile, no complete Wikidata entry, and no editorial mentions outside their own publishing. The content is there. The trust architecture is not. And in a world where AI systems are specifically designed to discount self-published claims, the trust architecture is what separates the businesses that get recommended from the businesses that get cited anonymously at best and ignored at worst.
The buyers are still out there. They’re just asking different questions, in different places, to systems that evaluate you differently than any human researcher ever did. What’s the point of a beautiful, content-rich website if the AI agent doing the buying research never puts you on the list?
That is the question agentic SEO — properly understood — is here to answer.
Sean Mullins is founder of SEO Strategy Ltd, a Southampton-based SEO consultancy with 20+ years of experience helping B2B businesses, law firms, SaaS companies and professional services firms build genuine, durable visibility — in Google, in AI systems, and in the trust layer that underpins both. The AAO framework, the AI Discovery Stack, and the entity corroboration framework are published and freely available at seostrategy.co.uk.