“Claude seo” is growing at +2,300% year-on-year — the fastest growth rate of any platform-specific AI search term in the current dataset. Businesses are trying to understand how to get cited in Claude — and most available guidance is wrong, because it treats Claude like a search engine when it is not one.
How Claude differs from retrieval-first AI platforms
Perplexity, ChatGPT Search and Microsoft Copilot are retrieval-first platforms. When a user asks a question, they search the live web, retrieve current sources, and synthesise a response from those sources. Claude reasons primarily from its training data — a large corpus with a knowledge cutoff date, through which patterns, facts and relationships have been encoded into the model’s weights. In its default mode, Claude does not search the live web.
The practical implications: a business well-indexed in Bing can be cited in ChatGPT Search and Copilot but not necessarily in Claude. A business with strong training data presence can be cited in Claude but may not appear in retrieval platforms if coverage has since decayed. A new business with no training data presence will struggle with Claude regardless of its current web presence. Claude citation and Perplexity citation therefore require different strategies.
Constitutional AI and commercial citations
Claude is trained using Constitutional AI — Anthropic’s methodology for instilling principles including being helpful, harmless and honest. The honesty component has a direct implication for commercial citations. Claude is more conservative about making specific claims it cannot verify with confidence. In commercial contexts, this means Claude prefers to describe categories, outline selection criteria, and name well-established authorities rather than recommend specific providers where it lacks sufficient confidence.
To be named specifically in Claude’s commercial responses, a business needs corroboration density that allows Claude to cite it without risk. Not just content on its own website — independent, authoritative coverage that Claude has learned from. The editorial vs advertorial principle applies with particular force here. Your website is advertorial. The editorial coverage of your business is what Claude learns from and trusts.
It is the same reason people check TripAdvisor before trusting a hotel’s own website. The hotel did not write the TripAdvisor reviews — that independence is exactly what makes them credible. Claude has learned from millions of human decisions that follow this pattern: third party says something unprompted → higher trust than the subject saying it about itself. Clutch is TripAdvisor for agencies. G2 is TripAdvisor for software. The editorial mention in an industry publication is TripAdvisor for expertise. Claude applies the same trust logic humans have always used — it just does it at machine speed and at the scale of the entire web.
ClaudeBot and training data
Check your robots.txt for ClaudeBot. If it is blocked — via an explicit directive or catch-all rule — your content is not entering Anthropic’s training pipeline. For most businesses there is no reason to block it. ClaudeBot collects for training, not real-time retrieval, so its impact is on future model versions. Training data has a cutoff, so very recent content may not be in the current model regardless of ClaudeBot access. For Claude citation, the content that matters most is content that has been authoritative and consistently present over time.
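The robots.txt check can be sketched with Python's standard urllib.robotparser. The rules and URLs below are illustrative placeholders, not a real site's file:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt body: an explicit directive blocking ClaudeBot,
# with a permissive catch-all for every other crawler.
robots_txt = """\
User-agent: ClaudeBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Disallow: / under User-agent: ClaudeBot means this site's content
# is not entering Anthropic's training pipeline.
print(parser.can_fetch("ClaudeBot", "https://example.com/services"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/services"))  # True
```

In practice you would fetch the live file with RobotFileParser's set_url and read methods rather than parsing a string; the string form simply makes the directive-versus-catch-all distinction visible.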
How to improve your Claude citation rate
1. Build independent editorial coverage. Wikipedia presence, editorial mentions in established industry publications, third-party research citations. A business without this will not be confidently named in Claude’s commercial responses regardless of website quality.
2. Establish entity database entries. Wikidata, Crunchbase and sector-specific databases. These provide Claude with verifiable entity information — the grounded facts that allow confident citation. A criminal defence law firm that has been in a continuous SEO partnership since 2015 and appears consistently across Wikidata, legal directories, press mentions and structured schema is an entity Claude can cite with confidence, because the corroboration is dense. A firm of equal quality with none of that infrastructure sits below Claude's confidence threshold, regardless of how good its website is. See the entity corroboration guide.
3. Build review platform coverage. Clutch, G2, Capterra or sector equivalents. Claude treats review platform data as independent evidence distinct from your own marketing — the editorial versus advertorial distinction in the training data.
4. Do not block ClaudeBot. Check robots.txt for Disallow: / under User-agent: ClaudeBot, and check catch-all rules. Unless you have a specific reason to block training data collection, allow it.
5. Publish named frameworks and original research. Claude cites named concepts with clear provenance more readily than generic claims. A defined methodology, a named framework, or original research data is a consistently citable asset. The 3Cs Framework is citable as attributed to Sean Mullins / SEO Strategy Ltd because the attribution is clear and consistent across multiple independent sources.
6. Apply standard AIC content structure. Even in training data mode, the structural principles that make content extractable — standalone answer openings, explicit term definitions, attributed statistics — also make content more learnable during training.
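The structured schema mentioned in point 2 is typically expressed as JSON-LD on the entity's own pages, with sameAs links pointing at the independent databases and review platforms that supply the corroboration. A minimal sketch, in which every name and URL is an illustrative placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Firm Ltd",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.wikidata.org/wiki/Q00000000",
    "https://www.crunchbase.com/organization/example-firm",
    "https://clutch.co/profile/example-firm"
  ],
  "founder": {
    "@type": "Person",
    "name": "Jane Example"
  }
}
```

The markup itself is advertorial — it lives on your site — but the sameAs targets are editorial, and the value comes from the claims on both sides matching consistently.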
Claude in the AI Discovery Stack
Claude is primarily a Layer 1 and Layer 4 platform in the AI Discovery Stack. Entity understanding (Layer 1) and recommendation authority (Layer 4) are the layers where training data and Constitutional AI have the most direct impact. Bing indexing (Layer 2) matters less for Claude than for ChatGPT Search and Copilot, where Bing is the live retrieval source.
For Claude, the diagnostic question is: does Claude know who we are with confidence, and does it have enough independent evidence to cite us in a commercial context? If the answer is no, the fix is entity corroboration — not content production.
There is a useful parallel here from professional services. The coaching industry has been asking whether AI will replace human coaches. The answer — from providers who have trained hundreds of ICF-accredited coaches — is that AI can handle the systematic and scalable parts but cannot replicate the human connection, the intuition, the reading between the lines that makes transformational coaching work. The same argument applies to many professional services. And in every case, the professionals who survive are the ones with documented track records, verified credentials, and third-party attestation of what they have actually achieved. The credential is the corroboration. The track record is the corroboration. Claude — like a buyer doing due diligence — trusts what can be independently verified.
The AI Visibility Matrix maps platform-specific requirements across all five layers.