I want to be honest about something that most SEO consultants won’t say publicly.
I have an Answer Engine Optimisation page that is, by most measures, better than the pages ranking above it. It covers the Answer Intent Framework in detail — definitional intent, procedural intent, comparative intent, evaluative intent. It covers People Also Ask optimisation, featured snippet capture, voice search, Speakable schema, FAQ structured data, the relationship between AEO and E-E-A-T, how to measure AEO performance, and how AEO maps onto AI-native search engines including Perplexity, ChatGPT Search and Google AI Overviews.
I built it that way deliberately, because I believe in doing this properly. I practice what I advise clients to do.
And yet, when you search “AEO SEO consultant” in Google right now, the AI Overview names other agencies. Not me.
I spent time today auditing exactly why. The answer is one of the clearest illustrations I’ve seen of the gap between what most people think AI visibility means and what it actually requires — and I think it’s worth being transparent about, because the gap is almost certainly affecting your visibility too.
Two types of AI visibility that require completely different strategies
Most of the advice circulating about AI search optimisation treats AI visibility as a single thing. Get your content structured properly. Add schema. Build topical authority. Answer questions clearly. All of that is correct — but it only addresses one type of AI visibility.
There are two, and they work through entirely different mechanisms.
Topical visibility is what most people are optimising for. It’s being cited when AI answers a question about a subject — “what is AEO”, “how do featured snippets work”, “what structured data helps AI cite my content.” This type of visibility is won through content quality, structural optimisation, entity authority, and schema. Your page matters here; this is the work most AI visibility advice focuses on.
Provider visibility is different. It’s being named when AI generates a recommendation list — “best AEO consultant UK”, “AEO agency”, “who should I hire for answer engine optimisation.” For this type of query, Google’s AI isn’t primarily reading your service page. It’s reading third-party sources that mention you: roundup articles, directory listings, review platforms, industry publications, editorial references.
This distinction is why I can have a better AEO page than my competitors and still not appear when someone asks Google’s AI to recommend an AEO consultant. The AI isn’t comparing our pages for that query. It’s comparing the external evidence that we exist as relevant providers in this category.
And on that dimension, I have almost nothing.
What Google’s AI is actually doing when it compiles a provider list
When Google generates an AI Overview for a query like “best AEO consultants in the UK”, it’s drawing on its grounding process — anchoring its generated answer to specific web sources. For informational queries, those sources are often your own pages. For commercial provider queries, they’re overwhelmingly third-party sources.
The agencies that appear in that AI Overview have been named in “best AEO agency UK” articles written by independent publishers. They appear on Clutch with reviews and case studies categorised under AEO as a service. They’ve been referenced by name in industry publications covering AI search. Google’s AI treats these external mentions as independent corroboration — multiple non-affiliated sources confirming that this entity is a legitimate, relevant provider for this category.
My service page, however good, is self-published evidence. It tells Google’s AI that I think I’m a good AEO consultant. External sources tell Google’s AI that others recognise me as one. For commercial recommendation queries, the second type of evidence is what drives inclusion.
This isn’t unique to AEO. The same mechanism operates for every “best [service] [location]” or “[discipline] consultant” type query. If you’re a consultant or agency and you’re not appearing in these AI-generated provider lists, the gap is almost certainly off-page — not on-page.
The specific sources that determine provider visibility
Through auditing what appears and what doesn’t, I’ve identified the sources that carry the most weight for AI provider recommendation queries. These aren’t guesses — they’re what I can see being cited when I look at who appears and why.
Review and directory platforms. Clutch is the most important single platform for professional services provider visibility in AI search. It’s structured, high-authority, and explicitly categorises providers by service type with verified client reviews. When an AI is assembling a list of recommended AEO consultants, Clutch is one of the primary sources it draws from. If you’re not listed here with the right categories, you don’t exist in this context regardless of how good your website is.
Editorial roundup articles. “Best AEO agencies 2026”, “top AI visibility consultants UK” — articles like these, published on marketing and SEO industry sites, function as editorial endorsement in the AI’s assessment. A single mention in a credible roundup article is worth more for provider visibility than multiple well-optimised service pages. This is why digital PR isn’t just a link-building exercise anymore — it’s a direct input into AI recommendation systems.
LinkedIn articles and posts. According to AthenaHQ’s State of AI Search 2026 report, LinkedIn content appears in approximately 4% of AI responses across platforms. That’s the fifth highest off-page citation source. Articles published here — with your name, your methodology, your frameworks, and links back to canonical source pages — create external attribution signals that AI models train on and retrieve from. This article is doing that work.
Knowledge graph entries. Wikidata, Crunchbase, Google Knowledge Panel data. These are the structured sources AI systems use to verify entity identity and understand what a business or person is authoritative on. If your business and your name aren’t in these databases with the right service categories and expertise declarations, the AI has reduced confidence naming you specifically rather than describing the category generally.
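To make the entity-verification point concrete: these structured sources are usually tied back to your own site through schema.org JSON-LD with sameAs links. Here is a minimal sketch of what that markup might look like, generated in Python — every name, URL, and identifier below is an illustrative placeholder, not a real entry:

```python
import json

# Minimal schema.org Organization entity with sameAs corroboration links.
# All names and URLs are placeholders for illustration only.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Consultancy Ltd",
    "url": "https://example.com/",
    "description": "Consultancy specialising in answer engine optimisation.",
    # sameAs ties the on-site entity to the external structured sources
    # (Wikidata, Crunchbase, directories) AI systems use to verify identity.
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",
        "https://www.crunchbase.com/organization/example-consultancy",
        "https://clutch.co/profile/example-consultancy",
    ],
    # knowsAbout declares the service categories the entity claims
    # expertise in, which external sources can then corroborate.
    "knowsAbout": [
        "Answer Engine Optimisation",
        "Entity SEO",
        "Structured data",
    ],
}

print(json.dumps(entity, indent=2))
```

The markup itself doesn’t create authority — it gives AI systems a machine-readable way to match your on-site entity against the independent records that do.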
Named mentions in industry publications. Being quoted, cited, or referenced by name in publications covering AI search — Search Engine Land, Search Engine Journal, specialist marketing publications — creates the kind of authoritative external attribution that compounds over time. The more often your name appears in credible sources in the context of a specific discipline, the more confidently AI systems associate you with that discipline.
Why I’m being transparent about this
I considered whether to write this. It could read as an admission of a gap in my own visibility strategy — which it is — or as an argument that on-page work doesn’t matter — which it isn’t.
I’m writing it because I think the shared-problem framing is more useful than the polished-expert framing, and because the people who need to understand this mechanism are going through exactly what I’m going through.
Most AI visibility content is written from a position of authority about a process the author has already completed. “Here’s what we did, here are the results.” This is fine, but it doesn’t help you when you’re in the middle of a gap that you’ve correctly diagnosed but haven’t yet fixed.
I’ve correctly diagnosed the gap. The on-page foundation is as solid as I can make it. The AEO page covers the discipline in more depth than almost anything else published on the topic in the UK. The entity architecture, schema, and structured data are all in place. The AI Discovery Stack — the five-layer framework I published this week — maps exactly where AEO sits in the broader AI visibility pipeline: Layer 3 (Selection) and Layer 4 (Recommendation), addressing how AI systems qualify and prioritise sources before deciding whether to name them.
But none of that on-page work creates the third-party corroboration that provider recommendation queries require. And the honest answer is that most businesses in my position — doing the on-page work properly, not yet well-represented in external editorial sources — are invisible to AI-generated recommendation lists despite having better content than the businesses that appear.
What the fix actually looks like
For anyone in the same position, here is the sequence that matters.
Get onto Clutch first. It’s the highest-leverage single action for professional services provider visibility in AI search. Create a profile, categorise yourself accurately, and ideally get one or two verified reviews. This alone changes your representation in the sources AI draws from for provider queries.
Pursue editorial roundup mentions deliberately. Not through spray-and-pray outreach, but by identifying the specific publications that write “best [your category] UK” articles and making the case for inclusion with evidence — case studies, specific results, a clear point of differentiation. The AthenaHQ data showing a 3× gap between the Share of Voice of the top-cited brand and the category average is instructive here: the businesses winning at AI visibility aren’t just good; they’ve been explicitly named by multiple independent sources.
Build your knowledge graph presence. A Wikidata entry for your business and your personal entity — with your service categories, your methodologies, and your notable work declared — creates a structured reference that AI systems use to verify and reinforce who you are and what you do. This is slower-compounding than Clutch or editorial mentions, but it’s the most durable foundation.
Publish attributed frameworks. The AI content experiment that circulated this week — 20 sites, 2,000 AI-generated articles, complete ranking collapse at 90 days — illustrated exactly why generic content doesn’t build durable AI visibility. What does build it: named methodologies with creation dates, attributed to specific practitioners, corroborated across multiple independent sources. The AI can’t fabricate the provenance chain. It can only retrieve it.
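That provenance chain can also be declared in markup. A hedged sketch of how a named, dated methodology page might express attribution in schema.org JSON-LD — the framework name, author, date, and citation URL are all hypothetical placeholders:

```python
import json

# Illustrative schema.org Article markup for an attributed framework page.
# Headline, author, date, and URLs are placeholders, not real data.
framework_page = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Example Discovery Framework",
    # A fixed publication date anchors the methodology's creation in time.
    "datePublished": "2026-01-15",
    # Attribution to a specific named practitioner.
    "author": {
        "@type": "Person",
        "name": "Jane Practitioner",
        "url": "https://example.com/about/",
    },
    # citation points at independent sources that reference the
    # framework by name — the external corroboration an AI can retrieve.
    "citation": [
        "https://example-industry-site.com/roundup-naming-the-framework/",
    ],
}

print(json.dumps(framework_page, indent=2))
```

The markup only restates what the external sources already confirm; without the independent mentions it points at, it carries little weight on its own.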
The practical question worth asking today
Search for your service category plus “consultant” or “agency” in Google. Look at the AI Overview. If you’re not in it, ask yourself honestly: is that because my content isn’t good enough, or because there are no third-party sources naming me as a relevant provider in this category?
For most businesses, it’s the second. And the fix isn’t another page.
Sean Mullins is founder of SEO Strategy Ltd, a Southampton-based SEO consultancy specialising in AI-first visibility, entity SEO, schema architecture and LLM optimisation. He has been building websites and SEO strategies since 2005.