
AI Platform Strategy: Start With Your Buyer, Not the Algorithm

Most AI visibility strategies are optimising for the wrong platform. If your buyers are using Microsoft Copilot and you are not optimising for Bing, you do not exist to them. This complete guide covers seven platform profiles with user psychology, the Bing imperative for B2B enterprise, the training data pathway most strategies ignore, conversational query research, the cost of delay in earned media, voice and ambient AI, and the Platform-Audience Stack — the named diagnostic framework for building AI visibility strategy from the audience out.

27 min read · 5,452 words · Updated Apr 2026

AI visibility strategy is the practice of identifying which AI platforms your specific buyers use and building the entity, content, and trust infrastructure to be cited in those platforms' answers. The Platform-Audience Stack — developed by Sean Mullins, SEO Strategy Ltd, 2026 — is the named diagnostic framework for building this strategy from the audience out rather than from platform popularity. It maps four buyer segments (Consumer, SMB, Enterprise, Regulated Enterprise) against six audience contexts (General, B2B Tech, Legal, Healthcare, Financial, Government) to identify the primary AI discovery surface for any given buyer profile. Getting the platform order wrong means investing in visibility that your buyers will never see.

80% of all AI referral traffic to websites comes from ChatGPT, based on SE Ranking data from 101,000+ sites (January 2026) — an aggregate figure that misrepresents B2B enterprise AI behaviour entirely
0.24% of global internet traffic comes from all AI platforms combined as of January 2026, up from 0.15% — the number that exposes how early this market still is (SE Ranking)
82% of AI citations come from earned media, based on analysis of over one million AI response links across ChatGPT, Gemini, Perplexity and Claude (Muck Rack, December 2025)
92.1% of AI citations in consumer electronics came from third-party authoritative sources rather than brand-owned content, confirming AI citation is primarily a selection-layer problem (University of Toronto, September 2025)
2% overlap between the journalists PR teams pitch most frequently and the journalists whose work AI models actually cite — a measurement problem, not a quality problem (Muck Rack, December 2025)

If your buyers are using Microsoft Copilot and you are not optimising for Bing, you do not exist to them.

Not 'you are less visible.' Not 'you are harder to find.' You do not exist. When they ask Copilot which suppliers to consider, which solutions fit their compliance requirements, which vendors their peers are using — Copilot queries Bing. If your site is not in Bing's index, you are not in the answer. You are not on the shortlist. You were never considered.

This is happening right now, at scale, across NHS trusts, financial institutions, law firms, and government departments — some of the highest-value B2B buying environments in the economy. And it is happening because the AI visibility industry has spent two years producing guidance that is almost exclusively oriented around Google, with almost no attention given to the question that should precede every platform decision:

Which AI is your customer actually using — and what are they using it for at the exact moment your category becomes relevant to their decision?

That question is not being asked. This guide answers it.

AI visibility is not about platforms. It is about environments. The platform a person uses is determined by the environment they work in, the tools their organisation approved, the habits their role requires, and the type of task they are performing when they become relevant to your business. Get the environment right and the platform follows. Skip the environment and you optimise with precision for the wrong surface.

Everything that follows is built on that principle.

Part One: The Reality of AI Search in 2026 — What the Numbers Show and What They Miss

The Aggregate Picture

The numbers that circulate in AI search commentary are real, and they describe something true — at the aggregate level.

According to SE Ranking data from more than 101,000 sites, ChatGPT accounts for roughly 80% of all AI referral traffic to websites. Google Gemini more than doubled its referral traffic between November 2025 and January 2026, passing Perplexity in that period — sending 29% more visitors globally and 41% more in the US. All AI platforms combined account for approximately 0.24% of global internet traffic as of January 2026, up from 0.15% the previous year.

These are the facts. They belong in any serious treatment of this subject. They are not the foundation of a complete strategy.

Here is what aggregate referral traffic measures: someone clicked a link inside an AI response and landed on a website. It does not measure how many buying decisions were influenced without a click. It does not measure how many research sessions shaped a shortlist that was never attributed to AI in any analytics report. And critically, it does not capture the queries happening inside enterprise environments through Copilot — where the AI is answering questions about your category every working day and the traffic signal never appears in anyone's dashboard.

Using aggregate referral data as a strategic foundation tells you what the majority of the market is doing. It tells you nothing specific about whether the majority is your market. For many B2B businesses, it isn't.

Three Forces Fragmenting the AI Audience

The AI search market is not converging toward a single dominant platform. It is fragmenting, and faster than the strategy frameworks built to address it can adapt.

Platform proliferation. In three years the AI assistant landscape has gone from one dominant product to seven or more with meaningfully distinct user bases. ChatGPT, Gemini, Claude, Perplexity, Copilot, DeepSeek, Grok — each with different architectures, different retrieval behaviours, different citation patterns, and different user profiles. Users are not choosing one and staying. They are distributing queries across multiple tools based on task type, context, and what kind of answer they need.

Enterprise IT policy. In regulated industries and large organisations, AI tool access is not a personal preference. It is a procurement decision. Many enterprise employees have exactly one AI tool on their desktop: Microsoft Copilot, deployed through their organisation's Microsoft 365 licence. They did not choose it. Their IT department rolled it out and it now sits inside every application they use. When they are in research mode — evaluating suppliers, comparing solutions, building briefing documents — they reach for the AI that is already there.

Modality shift. AI queries are no longer only typed into browser interfaces. Copilot is embedded in Windows 11. Gemini is integrated into Android. Siri is acquiring large language model capabilities. Apple Intelligence is live across iOS and macOS. The query that happens when someone speaks to their phone while commuting, asks their laptop assistant mid-meeting, or uses an earpiece to get a hands-free answer — that query reaches AI through a fundamentally different interface, in a more situational and conversational form, and often expects a single confident answer rather than a list. A strategy that assumes AI queries happen in a chat window is already behind the behaviour it is trying to influence.

Part Two: Seven Platforms, Seven Distinct Environments

The Platform Map — With What It Actually Means for Your Strategy

The profiles below are not a ranking. They are a map. For each platform, the final line (the primary action) is the one that matters most. Read each entry as a decision, not a description.

ChatGPT
- Primary user profile: Broadest adoption — consumers, SMEs, professionals, early adopters
- How they choose it: Default first choice; familiar, widely trusted
- How it appears in B2B: Individual and Teams licences; self-installed; widely used for general research
- Visibility priority: High for almost all businesses
- Primary action if this is your audience: Prioritise Google and Bing indexation, structured content, CITATE extractability

Google Gemini
- Primary user profile: Google Workspace users; mobile-first; growing fast via Android and Search
- How they choose it: Lowest friction for existing Google users; default in Google products
- How it appears in B2B: Google Workspace deployments; increasingly embedded in Search
- Visibility priority: Critical and growing — trajectory matters as much as current share
- Primary action if this is your audience: Strong Google organic performance carries directly; treat as Google-adjacent

Microsoft Copilot
- Primary user profile: Corporate employees in Microsoft 365 environments
- How they choose it: Not chosen — mandated by IT; deployed across the organisation
- How it appears in B2B: The default AI in regulated enterprise: healthcare, legal, finance, government
- Visibility priority: Critical for B2B enterprise and almost entirely unaddressed by current AI visibility advice
- Primary action if this is your audience: Bing Webmaster Tools, sitemap submission, IndexNow, Bing crawl audit — before anything else

Perplexity
- Primary user profile: Research-oriented professionals; journalists; academics; citation-conscious buyers
- How they choose it: Actively chosen for sourced answers; deliberate preference
- How it appears in B2B: Individual use in knowledge-worker roles; growing in professional research
- Visibility priority: High citation density per response; disproportionate value for verification-minded audiences
- Primary action if this is your audience: Build editorial coverage in the outlets Perplexity consistently cites for your category

Claude
- Primary user profile: Sophisticated users; long-document workers; privacy-conscious
- How they choose it: Deliberate choice — active preference, not the default
- How it appears in B2B: Less common in enterprise mandates; chosen individually
- Visibility priority: Strong for complex, analytical, long-form content
- Primary action if this is your audience: Depth and structure over volume; original frameworks and attributed claims

DeepSeek
- Primary user profile: Technical users; developers; cost-conscious organisations; API builders
- How they choose it: Open-source appeal; developer community adoption; price sensitivity
- How it appears in B2B: Emerging in developer and technical teams; used via API
- Visibility priority: Relevant for technical and developer-facing businesses
- Primary action if this is your audience: Training data presence and entity infrastructure rather than retrieval-only tactics

Grok
- Primary user profile: X/Twitter power users; news-followers; real-time information seekers
- How they choose it: Platform integration with X; appeals to users who distrust mainstream AI
- How it appears in B2B: Niche in traditional B2B; more relevant for media and current-events-adjacent brands
- Visibility priority: Narrow but concentrated for the right audience
- Primary action if this is your audience: Consistent, citable presence in the publications X users share and amplify

The Psychology Behind Platform Choice — And Why It Changes What You Should Produce

Platform choice reveals something that keyword research never could. It tells you not just what someone is looking for but what kind of answer they are prepared to trust, how they will verify it, and what they will do with it.

The ChatGPT user is often in exploration mode. Researching something unfamiliar, drafting something that needs a capable collaborator, or saving time on a task they already know how to do. Source attribution matters less to them than it does to other profiles. They will frequently act on an answer without tracing where it came from.

The Perplexity user has made an active choice. They specifically selected a platform that shows its sources. They are a professional with a verification requirement, a journalist checking a claim, or someone whose role carries liability if they act on inaccurate information. When they click a Perplexity citation they are acting on it with higher intention. Lower volume, higher quality audience behaviour.

The Copilot user did not choose anything. Copilot was there when they opened Outlook. It appeared in Teams. It shows up in Word when they are drafting a supplier evaluation document. Their queries are work-task-adjacent, their context is professional, and the retrieval layer underneath them is Bing. Understanding this user is not about psychology — it is about infrastructure.

The Claude user made a deliberate decision to use something that is not the default. That act of choosing signals something: they are comfortable with AI, they have formed an opinion about which tool handles complex thinking better, and they tend to ask harder questions with more context. Original, deeply structured, attributed content performs better with this audience than technically correct but thinly argued content.

Training data matters more for DeepSeek users than for any other group on this list. For developers accessing DeepSeek via API, what the model knows about your business from training is often more relevant than what it can retrieve in real time. Different investment. Different timescale. Different strategic priority.

Part Three: The Bing Imperative — The Most Important Insight Not Being Written About

You can rank number one on Google and still be invisible to the buyer making the decision.

If that buyer works in an NHS trust, a financial services firm, a law firm, or a government department — if they operate inside a Microsoft 365 environment — their daily AI tool is Copilot. Not by preference. By policy. And Copilot retrieves from Bing.

This is not a secondary consideration to add to a checklist later. It is the primary AI surface for a specific, commercially important, and almost entirely overlooked audience segment. An SEO professional who advises a B2B healthcare technology client to focus on Google AI Overviews without asking about their Bing visibility is not giving incomplete advice. They are giving the wrong advice for the wrong platform to the wrong audience.

The Sectors Where This Is Not Optional

Healthcare and the NHS run on Microsoft. NHS trusts, integrated care systems, and clinical commissioning groups are Microsoft 365 environments at scale. Their procurement teams, IT leads, and clinical technology decision-makers have Copilot as their default AI. When they research solutions, evaluate vendors, or compare clinical IT systems, a significant proportion of that research happens through Copilot.

Financial services standardises on Microsoft because of security posture, compliance tooling, and enterprise support structures that Microsoft's ecosystem provides better than any alternative at scale. The compliance officer at a bank, the IT director at an insurer, the procurement lead at an asset manager — these are professionals in Microsoft environments whose working AI is Copilot.

Legal services firms at scale are Microsoft shops. Large law firms have run on Outlook, Teams, and SharePoint for twenty years. Copilot is the AI that arrived inside those tools. Research workflows, vendor evaluations, competitive analysis — all increasingly running through Copilot.

Government and public sector in the UK: Microsoft 365 dominates. The G-Cloud framework, central government standardisation, and public sector security requirements consistently point toward Microsoft infrastructure. Copilot is becoming standard issue across departments.

Corporate enterprise generally: any organisation running Microsoft 365 at significant scale has Copilot available to employees with appropriate licences. Many have no other approved AI tool. For the employee researching your category, Copilot is the option, not one of several options.

The Fix — and What It Actually Requires

The mechanics of Bing optimisation are not complex. Bing Webmaster Tools verification, sitemap submission, IndexNow implementation for near-instant indexing of new and updated content, a Bing-specific crawl audit to surface errors that don't appear in Google Search Console, Microsoft Clarity for behavioural analytics, and Bing keyword data for understanding query patterns in your category.
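
IndexNow itself is a simple HTTP protocol: you host a key file at your site root and POST a JSON payload listing added or updated URLs to a shared endpoint. A minimal sketch in Python, where the domain and key are placeholders for your own:

```python
import json
from urllib import request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host, key, urls):
    """Build the JSON body IndexNow expects: the host, the key
    (which must also be served at https://<host>/<key>.txt),
    and the list of added or updated URLs."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": list(urls),
    }

def submit(payload):
    """POST the payload; Bing and other IndexNow engines pick it up."""
    req = request.Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with request.urlopen(req) as resp:  # 200 or 202 means accepted
        return resp.status

payload = build_indexnow_payload(
    "example.com",                        # placeholder domain
    "a1b2c3d4e5f6",                       # placeholder key
    ["https://example.com/new-guide"],
)
# submit(payload)  # uncomment to send for real
```

The key file and payload structure are the whole protocol; once this runs on every publish, new content reaches Bing's index in minutes rather than waiting on a crawl.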

Setting up the mechanics costs time but not significant budget. Much of it is free.

What requires expertise is the strategy around it: knowing that Bing matters for your specific audience, knowing which content to prioritise, understanding how Bing's entity recognition differs from Google's, sequencing the investment against other platform priorities, and measuring impact in an environment where Copilot citations are not straightforwardly tracked in standard analytics. The mechanics are learnable in an afternoon. Knowing when they matter, for which audience, and in what order — that is the strategic work.

A business that verifies in Bing Webmaster Tools and calls it done has addressed the infrastructure. A business that understands why Bing matters for their specific buyers and builds a complete content and entity strategy around it has built a competitive position their Google-only competitors cannot see and are not building against.

Part Four: The Query Language Nobody Is Researching

There is a gap between the language of keyword research and the language of AI queries. Closing that gap is one of the most significant untapped opportunities in AI visibility strategy — and almost no one is addressing it practically.

SEO keyword research was built for compressed queries: short phrases typed into a box, matched to documents containing those phrases or their semantic neighbours. The user compressed their need into a keyword. The search engine expanded it into results.

AI queries do not work this way. People ask AI the question they actually have, in natural language, with the context they want the AI to understand. A compliance officer does not type "managed file transfer HIPAA compliant" into Copilot. They ask: "We need to transfer large encrypted files between our trust and external contractors while maintaining Cyber Essentials compliance — what solutions do NHS organisations typically use for this?"

That is not a keyword. It is a situation description. The AI's response is built from an understanding of the situation — the organisation type, the compliance constraint, the specific context, the peer reference — not a match to terms.

The practical implication is significant and consistently missed: the content that earns AI citations is content written to describe a situation a reader might be in, not content engineered to match a query a researcher might type. These often produce fundamentally different content. A service page optimised for search keywords may never be the best answer to the conversational query a buyer is actually asking. The AI will cite the one that answers the situation.

How to Research Conversational Queries

Conversational query research is a distinct practice from keyword research. It does not replace keyword research for traditional SEO — it runs alongside it, answering a different question: what are the real situations my buyers find themselves in, and how do they describe those situations when they ask for help?

The most direct method is the one most rarely used: ask. Talk to existing clients about the questions they were asking before they found you. Review sales call notes for the framing customers use — not the category label you use internally, but the way they describe the situation they are in. This is primary research. It is more valuable for AI query optimisation than any tool because it captures the natural language of genuine need.

Secondary methods include reading the query suggestions Perplexity generates when you search your category — these are drawn from real user behaviour. Read Reddit threads and LinkedIn discussions where your buyers describe their challenges, not for topic ideas but for the exact language they use when genuinely confused, genuinely evaluating, genuinely in the situation your business addresses.

The gap between keyword optimisation and conversational query optimisation explains why well-ranked sites fail to appear in AI citations. Ranking for the keyword is not the same as answering the situation.

Part Five: Two Pathways to AI Visibility — And Why Most Strategies Only Use One

The Retrieval Path

For real-time retrieval queries — the majority of commercial and research prompts — the AI system queries a search index, retrieves relevant documents, and constructs a response from what it finds. This is where Google and Bing indexation matter. Where content structure and extractability matter. Where entity infrastructure — consistent naming, schema with external links, Wikidata presence — matters. Where site performance and crawlability matter.

This is the SEO practitioner's terrain and it is legitimately important. It is also the only pathway that most AI visibility guidance addresses.

The Training Data Path

Some AI platforms — Claude, DeepSeek, and ChatGPT and others when not in real-time search mode — operate from training data ingested before the query was made. They are not retrieving documents at the moment of response. They are drawing on a model of the world built from text they were trained on. For these platforms, your Google rankings are irrelevant to whether the AI knows about your business. What matters is whether information about your business appeared in the sources they trained on — and whether those sources were credible enough to be incorporated with confidence.

Your SEO is optimising the retrieval path. Training data is what gets you named when AI is not searching at all.

Training data presence is a different investment thesis from SEO. It has a different timescale — months and years, not days and weeks. It has different inputs: editorial mentions in high-authority publications, academic citations, Wikidata entries with sourced statements, inclusion in the primary sources AI companies select for training data. And it has a different outcome: a business well-represented in training data gets named confidently across AI platforms regardless of whether the platform is retrieving in real time.

The businesses building training data presence in 2026 through consistent earned editorial coverage, entity infrastructure, and independent corroboration are building an advantage that compounds through every model update and every training cycle. The businesses that only think about retrieval are building a position that depends entirely on the AI choosing to search — which it does not always do.

Both pathways matter. Both require deliberate investment. The strategy that only addresses one is half a strategy.

Part Six: The Concentration Problem and the Cost of Delay

Where Citations Actually Come From

In December 2025, Muck Rack published updated findings from its What Is AI Reading? research programme — an analysis of more than one million links cited by AI models including Gemini, Perplexity, Claude and ChatGPT, conducted between July and December 2025. The findings: 94% of all citations come from non-paid sources, and earned media accounts for 82%.

It is worth being transparent about the source. Muck Rack produces Generative Pulse, a commercial tool for monitoring and improving brand visibility in AI responses. They have a product interest in earned media being seen as strategically important. That commercial context does not invalidate the finding — it is consistent with independent research from the University of Toronto conducted in September 2025, which found AI citing third-party authoritative sources 92.1% of the time in consumer electronics and 81.9% in automotive. Two independent datasets, different methodologies, same direction. The finding is robust.

What neither headline number surfaces is the concentration beneath it. AI citation rates are highest for content published within the first seven days of release, with more than half of all citations referencing material published within the prior eleven months. The outlets driving those citations are not evenly distributed. For most businesses, approximately 20 publications in their sector account for the majority of AI citation coverage.
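
If you keep a log of which outlet each observed AI citation came from, the concentration is straightforward to measure for your own category. A small sketch with invented sample data standing in for a real query log:

```python
from collections import Counter

# Outlet of each citation observed in your query log (invented sample data)
citations = [
    "TechRadar", "TechRadar", "The Register", "TechRadar",
    "The Register", "ZDNet", "Wired", "The Register", "TechRadar",
]

counts = Counter(citations)
total = len(citations)

# Share of all citations held by the top two outlets
top2_share = sum(n for _, n in counts.most_common(2)) / total
print(counts.most_common(2))   # [('TechRadar', 4), ('The Register', 3)]
print(round(top2_share, 2))    # 0.78
```

Run against a real log, the same two lines tell you which handful of outlets dominate your category's citations and how steep the concentration curve is.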

Even knowing that AI relies heavily on earned media, the gap between where PR effort goes and where AI citations actually come from is striking. Only a 2% overlap exists between the journalists PR teams pitch most frequently and the journalists whose work AI models actually cite. That is not a quality problem. It is a measurement problem. PR investment is being targeted at reach and prestige. AI citation is determined by sector authority, editorial independence, and whether the publication sits in the AI system's trusted source set for that query type. These are different criteria. They produce different target lists.

The structure of cited content also differs meaningfully from content that is not cited. Cited press releases contain roughly twice as many statistics, 30% more action verbs, 2.5 times as many bullet points, and a 30% higher rate of objective sentences than those AI ignores. This is the same standard that makes a web page extractable, operating at the level of press releases. Structure and specificity are not just on-page considerations. They determine what earns citation at every format level.

The Compounding Advantage — and What It Costs to Wait

AI citation is not a level playing field that resets each month. It is a compounding system.

The outlets AI trusts for your sector are forming into a recognised set. The businesses that appear consistently in those outlets are becoming familiar entities to AI systems — confidently named, reliably described, cited without hedging. The businesses absent from those outlets are invisible in AI responses even when they have stronger products, higher rankings, and larger marketing budgets.

This concentration is forming now, in 2026. The equivalent in traditional SEO was domain authority accumulation in the early 2010s — businesses that built strong editorial link profiles early created advantages that became progressively more expensive for later entrants to close. The same dynamic is forming in AI citation. The businesses that establish citation presence in the right 20 outlets for their sector this year are building a position that will be materially stronger in 2028. The businesses that wait until the citation hierarchy in their category is established are paying a higher price to enter a more closed system.

Every month an earned media programme is not running is a month that compounding advantage accrues to someone else.

Part Seven: The Voice and Ambient AI Layer — Already Here, Already Missed

Every AI visibility framework has a blind spot if it assumes queries happen in a chat window. They increasingly do not.

Copilot is embedded in Windows 11 and appears inside applications. Gemini is integrated into Android and Google's mobile apps. Siri is acquiring large language model capabilities across iOS and macOS. Amazon is rebuilding Alexa on LLM foundations. The query that happens when someone speaks to their phone while commuting, asks their laptop assistant a question mid-meeting, or uses an earpiece for a hands-free answer is not interacting with a browser tab.

Voice and ambient AI queries are more situational, more specific, and more likely to produce a single confident answer rather than a list — because the interface cannot show a list. It responds. The answer either answers the question or it does not. There is no opportunity to scroll past a vague response.

The content that performs in this context is the same content that performs in text-based AI citation, with less margin for ambiguity. Standalone opening answers. Named statistics with inline sources. Attributable claims. Explicit definitions. The extraction standard is not different — the tolerance for failure is lower.

This is not a future consideration. These interfaces are deployed and in active use. The strategy that accounts for them now is addressing a retrieval context that already exists and will only grow.

Part Eight: The Platform-Audience Stack

How AI Visibility Strategy Should Actually Be Built

The approach this guide describes — mapping AI platform selection to specific audience profile and use context before deciding where to invest — is a distinct diagnostic practice. It needs a name, because unnamed practices are not applied consistently, do not transfer between client conversations, and do not accumulate into a repeatable system.

The Platform-Audience Stack, developed by Sean Mullins at SEO Strategy Ltd, is the diagnostic framework for building AI visibility strategy from the audience out. It has five questions applied in sequence. The sequence is not optional — each answer shapes what follows.

Question One: Who is the buyer? The situational profile, not the demographic one. What industry, what size of organisation, what role, what technology environment, what constraints on tool access. A freelance designer and a compliance officer at a financial institution may both qualify as "marketing decision-makers" by a demographic definition. Their AI environments are completely different.

Question Two: Which AI do they have access to? Particularly in enterprise and regulated contexts: is this person in a Microsoft 365 environment? Does their organisation have an approved AI tool list? Is their AI tool a personal choice or an employer mandate? The answer here determines which retrieval layer matters. It is the most important single question in the Stack and the one most consistently skipped.

Question Three: What do they use AI for when they are in your category? Not what AI they use generally — what AI they reach for when doing the specific thing that connects to your business. Research, evaluation, shortlisting, drafting a brief. A professional might use Claude for complex writing, ChatGPT for quick research, and Copilot because it is already open in the next window. The session that matters to your strategy is the one where their task leads to your category.

Question Four: What language do they actually use? Not keyword language. Situational language. The constraints they are working within. The way they describe their problem to an AI assistant rather than to a search engine. This is answered by primary research — client conversations, sales call analysis, community observation — not by keyword tools.

Question Five: Which platforms are your competitors appearing on? Run the queries. Read the responses. Log every citation. This tells you where the competitive AI citation landscape already exists and where the gaps are. The gaps are the opportunity. The citations your competitors already hold are the compounding advantage you are not yet building.

From these five answers, a Platform-Audience Stack emerges: the prioritised set of platforms and retrieval surfaces that matter for this specific business, with primary investment focus and hygiene-level maintenance clearly distinguished.
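
As a toy illustration only, the output of the five questions can be held as a simple lookup from buyer profile to primary surface. The segment-to-platform mappings below are invented for the example, not prescriptions from the framework itself:

```python
# Toy encoding of a Platform-Audience Stack outcome. The mappings are
# illustrative assumptions, not the framework's actual prescriptions.
PRIMARY_SURFACE = {
    ("Enterprise", "Healthcare"): "Microsoft Copilot (Bing index)",
    ("Enterprise", "Legal"): "Microsoft Copilot (Bing index)",
    ("SMB", "B2B Tech"): "ChatGPT",
    ("Consumer", "General"): "ChatGPT",
}

def primary_surface(segment, context):
    """Return the primary AI discovery surface for a buyer profile,
    falling back to ChatGPT where no specific mapping is recorded."""
    return PRIMARY_SURFACE.get((segment, context), "ChatGPT")

print(primary_surface("Enterprise", "Healthcare"))
# Microsoft Copilot (Bing index)
```

The point of the structure is the point of the Stack: the platform decision is a function of segment and context, not a single global default.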

The Platform-Audience Stack and CITATE: A Complete System

The Platform-Audience Stack answers one question: where do I need to appear?

CITATE answers the completing question: how do I ensure I am chosen once I appear there?

CITATE — the content citation framework developed by SEO Strategy Ltd — defines the threshold at which a page becomes extractable, evidenced, and attributable enough for AI systems to cite. Six criteria across three layers: Structure (standalone opening answer, explicit definition), Evidence (named statistic with inline source, named source), Identity (named entity, attributable claim). These are not new principles invented for the AI era. They are the application of what made great content great — the same structural discipline that produced the best SEO content of the last decade — operationalised for the specific extraction problem AI systems have in 2026.
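
The six criteria lend themselves to a pass/fail checklist. A sketch of an audit structure using the criteria names from the framework as described above; how each criterion is detected on a real page is left to manual review:

```python
# The six CITATE criteria, grouped by layer, as named in the framework.
CITATE_CRITERIA = {
    "Structure": ["standalone opening answer", "explicit definition"],
    "Evidence": ["named statistic with inline source", "named source"],
    "Identity": ["named entity", "attributable claim"],
}

def citate_score(passed):
    """Given the set of criteria a page meets (from manual review),
    report per-layer results and whether the page clears all six."""
    report = {
        layer: [c for c in criteria if c in passed]
        for layer, criteria in CITATE_CRITERIA.items()
    }
    total = sum(len(met) for met in report.values())
    return report, total == 6

report, citation_ready = citate_score({
    "standalone opening answer",
    "named statistic with inline source",
    "named entity",
})
print(citation_ready)  # False: three of six criteria met
```

Because the criteria are binary and named, the audit is repeatable across pages and practitioners, which is exactly the property the text argues EEAT lacked.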

EEAT described qualities worth having. It did not tell anyone how to produce them. CITATE is what EEAT should have been: specific, testable, applicable by any practitioner, grounded in the mechanism rather than the aspiration.

Together, Platform-Audience Stack and CITATE form a complete AI visibility system. Most guidance addresses part of one. These two frameworks together address the whole problem in sequence: first, know where your audience is. Then, ensure your content can be extracted and attributed when AI arrives there.

This is how AI visibility strategy is built now. Everything else is tactics without a map.

Part Nine: What to Do Now — Five Steps, No Softness

This section exists for one purpose: to move you from understanding the problem to acting on it. The five steps are sequenced. Skipping one does not accelerate the process — it undermines every step that follows.

Step One: Identify your buyer's environment.

Before any platform decision, determine whether your primary buyers are in consumer, SMB, or enterprise environments. If enterprise, establish whether they operate in Microsoft 365 environments. If you do not know, ask three existing clients what AI tools their organisation has approved. If the answer includes Copilot, your Bing strategy is not a parallel consideration — it is the primary one.

If you skip this step: you will spend the rest of this process optimising for the wrong platform with complete confidence.

Step Two: Run ten queries across four platforms.

Open ChatGPT, Gemini, Perplexity, and Copilot. Run ten queries that a buyer in your category would realistically ask — not your product name, but the problems your product solves, the situations it addresses, the comparisons it should appear in. Read every response. Log every citation. Note every competitor that appears where you do not.

This is not keyword research. It is AI citation research. It takes two hours and produces the most useful strategic information available to your business right now.

If you skip this step: you are making platform investment decisions without knowing what the current AI citation landscape in your category looks like.
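The log from Step Two does not need tooling, but keeping it as structured records makes the gaps obvious. A minimal sketch, assuming a simple record per citation observed — the platform, query, and domain names below are illustrative, not real data:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Citation:
    platform: str   # e.g. "ChatGPT", "Copilot", "Gemini", "Perplexity"
    query: str      # the conversational query you ran
    domain: str     # the domain the AI response cited

def citation_landscape(citations, our_domain):
    """Tally cited domains per platform and flag platforms where we never appear."""
    by_platform = {}
    for c in citations:
        by_platform.setdefault(c.platform, Counter())[c.domain] += 1
    gaps = [p for p, counts in by_platform.items() if our_domain not in counts]
    return by_platform, gaps

log = [
    Citation("ChatGPT", "best practice management software for small law firms", "competitor-a.com"),
    Citation("ChatGPT", "best practice management software for small law firms", "ourfirm.co.uk"),
    Citation("Copilot", "best practice management software for small law firms", "competitor-a.com"),
]
by_platform, gaps = citation_landscape(log, "ourfirm.co.uk")
# gaps lists platforms where your domain was never cited; here it is ["Copilot"]
```

The `gaps` list is the strategic output: a platform where competitors are cited and you are absent is exactly the kind of finding that reorders platform priorities.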

Step Three: Fix your entity infrastructure before anything else.

Verify your business name is identical — not approximately identical — across your website, Google Business Profile, Bing Places, Companies House, Wikidata, LinkedIn, and every professional directory relevant to your sector. Ensure your Organisation schema markup includes sameAs links pointing to Wikidata and Companies House. Create a Wikidata entry if one does not exist. These fixes cost nothing. They remove the identity ambiguity that causes AI systems to hedge rather than name you.

If you skip this step: your content may be indexed, extractable, and earning editorial coverage — and AI will still say "there are firms that specialise in this" rather than naming you, because it cannot confirm which entity it is dealing with.
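The sameAs linking from Step Three is done in JSON-LD, typically embedded in a script tag of type application/ld+json. A minimal sketch — the name, URL, Wikidata ID, and Companies House number below are placeholders to be replaced with your own verified identifiers:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Consultancy Ltd",
  "url": "https://www.example.co.uk/",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://find-and-update.company-information.service.gov.uk/company/00000000",
    "https://www.linkedin.com/company/example-consultancy"
  ]
}
```

The point of sameAs is disambiguation: each URL is an independent registry confirming that this website, this Wikidata entity, and this Companies House record are the same organisation, under exactly the same name.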

Step Four: Audit your Bing presence independently of Google.

Verify your site in Bing Webmaster Tools if you have not already. Submit your sitemap. Implement IndexNow. Run the crawl diagnostics and resolve every error. Check Bing keyword data for your category queries. This is a separate process from your Google Search Console review — Bing has its own index, its own crawl behaviour, and its own error patterns that do not surface in Google's tools.

If your buyers are in Microsoft 365 environments and you skip this step: you do not exist to them in AI search, regardless of how strong your Google performance is.

Step Five: Begin a targeted earned media programme for your 20 outlets.

Identify the publications that AI systems consistently cite when answering questions in your category. These are your 20. They are not the highest-circulation publications in your sector. They are the ones with established authority for your specific query type — knowable by running Step Two carefully. Build relationships with the journalists who write for them based on what makes content citable: statistics, named sources, objective framing, genuine news value. Not what makes a good traditional press release.

AI citation rates are highest for content published within the first seven days of release. Every month this programme is not running is a month that window closes for someone else.

If you skip this step: entity infrastructure and on-page extractability will take you to the threshold of AI visibility. Earned corroboration is what crosses it.
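Once the Step Two citation log exists, the 20 outlets fall out of a frequency count rather than a hunch. A sketch — the domains below are illustrative:

```python
from collections import Counter

# One entry per citation observed in an AI response, pooled across platforms.
cited_domains = [
    "trade-journal-a.com", "trade-journal-a.com", "national-paper.co.uk",
    "trade-journal-b.com", "trade-journal-a.com", "national-paper.co.uk",
]

# The outlets AI cites most often for your category queries: your target list.
top_outlets = Counter(cited_domains).most_common(20)
```

Run the same count monthly and the list itself becomes a tracking metric: outlets rising up the tally are gaining authority for your query type before any traffic data would show it.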


A structured audit of your current position across all four layers — entity foundation, content extractability, platform coverage, and citation infrastructure — is the fastest way to identify which of these five steps demands the most urgent attention for your specific business. The AI Visibility Audit exists for exactly that purpose.

The Principle That Does Not Change

AI platforms will continue to shift. Gemini's trajectory from late 2025 is not necessarily its trajectory in late 2026. ChatGPT's referral traffic dominance may not persist at current levels. Copilot's enterprise penetration will accelerate as Microsoft continues deploying it globally. New platforms will emerge. Some on this list will consolidate or diminish.

The principle underneath all of it will not change.

The businesses that appear most consistently in AI responses are the ones whose identity is unambiguous, whose content answers real situations with specificity and attribution, whose authority is independently confirmed, and whose platform presence matches their actual audience. This has always been how trust-based systems select who to recommend. AI has not invented a new game. It has made the existing game more legible — and made the cost of getting the foundation wrong more immediate.

The businesses that ask the audience question first, fix the entity infrastructure that allows AI to identify them with confidence, structure their content to be extracted and attributed, and build consistent earned coverage in the right publications — those businesses are building a compounding position. The businesses following generic AI visibility guidance without asking the audience question are spending money on visibility in places their buyers may never look.

The audience question is not optional. It is the strategy.

Ask it before you optimise anything. Let the answer determine where you go. And if you do not yet know which AI your customers are actually using — that is the starting point, and it is entirely knowable.


Sean Mullins is the founder of SEO Strategy Ltd, a Southampton-based consultancy specialising in AI-first visibility, entity SEO, and the practical application of structured content standards to the AI citation challenge. The Platform-Audience Stack diagnostic framework described in this guide is developed from client work across healthcare IT, legal services, SaaS, and B2B technology sectors. The CITATE framework — the content citation standard that determines whether AI systems can extract and attribute your content — is available at seostrategy.co.uk/citate-framework/. For a structured diagnosis of your current position across all four AI visibility layers, start with the AI Visibility Audit.

Key Definitions

Platform-Audience Stack
A diagnostic framework developed by Sean Mullins at SEO Strategy Ltd for building AI visibility strategy from the audience out. Five sequential questions determine which AI platforms matter for a specific business, which retrieval layers to prioritise, and in which order to invest. The Stack determines where a business needs to appear; CITATE determines how it is chosen once it appears there.
AI visibility environment
The combination of enterprise IT policy, approved AI tools, role requirements, and task context that determines which AI platform a buyer actually uses when researching in your category. Environment precedes platform: the correct platform to optimise for follows from understanding the buyer's working environment, not from aggregate referral traffic data.
Conversational query research
A research discipline distinct from keyword research, focused on identifying the natural-language situation descriptions buyers use when asking AI for help — as opposed to the compressed keyword phrases used in traditional search. Content written to describe a situation earns AI citations; content engineered to match keywords often does not.

Frequently Asked Questions

Which AI platform should I prioritise for B2B visibility?

The answer depends entirely on your audience environment, not on which platform has the most aggregate users. If your buyers are in Microsoft 365 enterprise environments — NHS, financial services, legal, government — Copilot is their default AI and it retrieves from Bing. For these audiences, Bing optimisation is a primary priority equal to or ahead of Google. For consumer audiences, Google and ChatGPT dominate. The Platform-Audience Stack diagnostic determines the correct priority for your specific situation.

Why does Bing matter for AI visibility if ChatGPT has 80% of AI referral traffic?

The 80% ChatGPT referral traffic figure measures clicks from AI responses to websites — it does not capture enterprise Copilot usage at all. Microsoft Copilot retrieves from Bing and is deployed at scale across Microsoft 365 environments in the NHS, financial services, legal, and government sectors. These buyers have Copilot as their mandated AI tool. When they research suppliers through Copilot, the retrieval layer is Bing. A site absent from Bing does not appear in those responses regardless of Google rankings.

What is the Platform-Audience Stack?

The Platform-Audience Stack is a diagnostic framework developed by Sean Mullins at SEO Strategy Ltd for building AI visibility strategy from the audience out. Five sequential questions: Who is the buyer? Which AI do they have access to? What do they use AI for when they are in your category? What language do they actually use? Which platforms are your competitors appearing on? Each answer shapes what follows. The Stack determines where a business needs to appear; CITATE determines how it is chosen once it appears there.

What is the difference between retrieval-based and training-data-based AI visibility?

Retrieval-based AI queries a live search index when generating a response — this is where Google and Bing indexation, content extractability, and entity infrastructure matter. Training-data-based AI draws from knowledge ingested during model training, before the query was made. For training-data platforms, Google rankings are irrelevant. What matters is whether the business appeared in high-authority sources included in training data — editorial coverage, Wikidata entries, academic citations. Most AI visibility strategies only address retrieval. Both pathways require deliberate investment.

How do I find the 20 outlets that drive AI citation in my sector?

Run your top commercial queries through ChatGPT, Perplexity, Gemini, and Copilot. Read every response. Log every citation. Note which publications appear consistently when AI answers questions in your category. These are your 20. They are not necessarily the highest-circulation publications — they are the ones with established authority for your specific query type. Only a 2% overlap exists between the journalists PR teams typically pitch and the journalists whose work AI actually cites, which means most PR investment is targeting the wrong outlets.

Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.

Ready to improve your search visibility?

Book a free 30-minute consultation and let's discuss your SEO strategy.

Get in Touch
Platform-Audience Stack  ·  SEO Strategy Ltd  ·  2026

Which AI Is Your Buyer Actually Using?

The answer should drive your entire AI visibility strategy. This matrix maps the AI your buyer uses — by audience type and sector — with the single highest-leverage action for each combination.

Audience (rows):
Consumer: personal choice, no mandate
SMB: owner-led, personal choice
Enterprise: mixed, some mandates
Regulated Enterprise (NHS · Finance · Legal · Gov): Copilot mandated

Sector (columns): General · B2B Tech · Legal · Healthcare · Financial · Government

Platform key:
ChatGPT: personal choice, broadest reach
Copilot: retrieves from Bing, Microsoft mandate
Gemini: Google ecosystem, mobile-first
Perplexity: citation-conscious, sourced answers