
Claude SEO: How to Get Your Brand Cited in Anthropic’s Claude

Claude is Anthropic's AI assistant — with "claude seo" growing at +2,300% year-on-year, it is becoming a commercial discovery surface businesses cannot ignore. This guide explains how Claude retrieves and cites sources, how ClaudeBot crawls the web, how Constitutional AI affects what Claude recommends, and what you can do about it.

5 min read · 1,046 words · Updated Apr 2026

Claude is Anthropic's AI assistant — accessed via Claude.ai and the Claude apps, embedded in enterprise workflows, and deployed as the AI layer inside third-party products. Unlike ChatGPT Search or Microsoft Copilot, Claude's primary citation mode is not live web retrieval. Claude reasons predominantly from its training data, supplemented in some modes by web search. This fundamental difference means the strategy for Claude citation is distinct from the strategy for Perplexity, ChatGPT Search or Copilot — and most businesses are not accounting for that distinction.

+2,300% year-on-year growth in searches for "claude seo" (UK and US), from a base of 90 monthly searches — the fastest growth of any platform-specific AI search term in the dataset, confirming Claude is becoming a commercial discovery surface that marketing teams are actively trying to understand. (Source: Google Keyword Planner, March 2025–February 2026.)
+508% year-on-year growth in searches for "claude" itself — 1.5 million monthly searches in the UK and US — the fastest-growing branded AI search term in the dataset, confirming platform-level reach comparable to Perplexity. (Source: Google Keyword Planner, March 2025–February 2026.)

“Claude seo” is growing at +2,300% year-on-year — the fastest growth rate of any platform-specific AI search term in the current dataset. Businesses are trying to understand how to get cited in Claude — and most available guidance is wrong, because it treats Claude like a search engine when it is not one.

How Claude differs from retrieval-first AI platforms

Perplexity, ChatGPT Search and Microsoft Copilot are retrieval-first platforms. When a user asks a question, they search the live web, retrieve current sources, and synthesise a response from those sources. Claude reasons primarily from its training data — a large corpus with a knowledge cutoff date, through which patterns, facts and relationships have been encoded into the model’s weights. In its default mode, Claude does not search the live web.

The practical implications: a business well-indexed in Bing will be cited in ChatGPT Search and Copilot but not necessarily in Claude. A business with strong training data presence will be cited in Claude but may not appear in retrieval platforms if coverage has since decayed. A new business with no training data presence will struggle with Claude regardless of its current web presence. The strategy for Claude citation and the strategy for Perplexity citation are different strategies.

Constitutional AI and commercial citations

Claude is trained using Constitutional AI — Anthropic’s methodology for instilling principles including being helpful, harmless and honest. The honesty component has a direct implication for commercial citations. Claude is more conservative about making specific claims it cannot verify with confidence. In commercial contexts, this means Claude prefers to describe categories, outline selection criteria, and name well-established authorities rather than recommend specific providers where it lacks sufficient confidence.

To be named specifically in Claude’s commercial responses, a business needs corroboration density that allows Claude to cite it without risk. Not just content on its own website — independent, authoritative coverage that Claude has learned from. The editorial vs advertorial principle applies with particular force here. Your website is advertorial. The editorial coverage of your business is what Claude learns from and trusts.

It is the same reason people check TripAdvisor before trusting a hotel’s own website. The hotel did not write the TripAdvisor reviews — that independence is exactly what makes them credible. Claude has learned from millions of human decisions that follow this pattern: third party says something unprompted → higher trust than the subject saying it about itself. Clutch is TripAdvisor for agencies. G2 is TripAdvisor for software. The editorial mention in an industry publication is TripAdvisor for expertise. Claude applies the same trust logic humans have always used — it just does it at machine speed and at the scale of the entire web.

ClaudeBot and training data

Check your robots.txt for ClaudeBot. If it is blocked — via an explicit directive or catch-all rule — your content is not entering Anthropic’s training pipeline. For most businesses there is no reason to block it. ClaudeBot collects for training, not real-time retrieval, so its impact is on future model versions. Training data has a cutoff, so very recent content may not be in the current model regardless of ClaudeBot access. For Claude citation, the content that matters most is content that has been authoritative and consistently present over time.
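The robots.txt check above can be scripted with Python's standard-library robot parser. This is a minimal sketch — the two robots.txt strings are illustrative examples, not taken from any real site — showing that a catch-all Disallow blocks ClaudeBot even when it is never named explicitly:

```python
from urllib.robotparser import RobotFileParser

def claudebot_allowed(robots_txt: str, url: str = "https://example.com/") -> bool:
    """Return True if ClaudeBot may fetch the given URL under this robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch("ClaudeBot", url)

# Catch-all rule: blocks every crawler, ClaudeBot included.
blocking = "User-agent: *\nDisallow: /\n"

# Explicit ClaudeBot group: allowed, despite a restriction on other crawlers.
allowing = "User-agent: ClaudeBot\nAllow: /\n\nUser-agent: *\nDisallow: /private/\n"
```

In practice you would fetch your live /robots.txt and pass its text to the same function; the parser applies the most specific matching User-agent group, which is why the second example permits ClaudeBot while restricting everything else.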

How to improve your Claude citation rate

1. Build independent editorial coverage. Wikipedia presence, editorial mentions in established industry publications, third-party research citations. A business without this will not be confidently named in Claude’s commercial responses regardless of website quality.

2. Establish entity database entries. Wikidata, Crunchbase and sector-specific databases provide Claude with verifiable entity information — the grounded facts that allow confident citation. A criminal defence law firm that has worked with the same SEO partner continuously since 2015 and appears consistently across Wikidata, legal directories, press mentions and structured schema can be cited with confidence because the corroboration is dense. A firm of equal quality with none of that infrastructure sits below Claude's confidence threshold, regardless of how good its website is. See the entity corroboration guide.

3. Build review platform coverage. Clutch, G2, Capterra or sector equivalents. Claude treats review platform data as independent evidence distinct from your own marketing — the editorial versus advertorial distinction in the training data.

4. Do not block ClaudeBot. Check robots.txt for Disallow: / under User-agent: ClaudeBot, and check catch-all rules. Unless you have a specific reason to block training data collection, allow it.

5. Publish named frameworks and original research. Claude cites named concepts with clear provenance more readily than generic claims. A defined methodology, a named framework, or original research data is a consistently citable asset. The 3Cs Framework is citable as attributed to Sean Mullins / SEO Strategy Ltd because the attribution is clear and consistent across multiple independent sources.

6. Apply standard AIC content structure. Even in training data mode, the structural principles that make content extractable — standalone answer openings, explicit term definitions, attributed statistics — also make content more learnable during training.
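The entity database work in step 2 is usually reinforced on your own site with schema.org Organization markup whose sameAs links point at the same independent profiles. A hedged sketch of generating that JSON-LD — every name, ID and URL here is a placeholder, not a real entity:

```python
import json

# Illustrative schema.org Organization markup; all names and URLs are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency Ltd",
    "url": "https://www.example-agency.co.uk",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder Wikidata ID
        "https://www.crunchbase.com/organization/example-agency",
        "https://clutch.co/profile/example-agency",
    ],
}

# Embed in a page <head> so crawlers can tie the site to its database entries.
json_ld = f'<script type="application/ld+json">{json.dumps(org, indent=2)}</script>'
```

The sameAs array is doing the corroboration work: it tells any crawler that the website entity and the Wikidata, Crunchbase and Clutch entities are the same thing.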

Claude in the AI Discovery Stack

Claude is primarily a Layer 1 and Layer 4 platform in the AI Discovery Stack. Entity understanding (Layer 1) and recommendation authority (Layer 4) are the layers where training data and Constitutional AI have the most direct impact. Bing indexing (Layer 2) matters less for Claude than for ChatGPT Search and Copilot, where Bing is the live retrieval source.

For Claude, the diagnostic question is: does Claude know who we are with confidence, and does it have enough independent evidence to cite us in a commercial context? If the answer is no, the fix is entity corroboration — not content production.
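That diagnostic can be made concrete with a simple gap analysis: list the independent corroboration sources in your category, record where you and a cited competitor each appear, and take the difference. The coverage data below is hypothetical, purely to show the shape of the exercise:

```python
# Hypothetical coverage data: which corroboration sources mention each brand.
coverage = {
    "us":         {"G2", "company blog"},
    "competitor": {"G2", "Wikidata", "Clutch", "industry press"},
}

# Sources that count as independent corroboration (a company blog is advertorial).
independent = {"G2", "Wikidata", "Clutch", "industry press"}

# The gap: independent sources citing the competitor but not us.
gap = (coverage["competitor"] & independent) - (coverage["us"] & independent)
```

Each source in the resulting gap set is a corroboration target — coverage to earn through editorial PR or database entries, not through more content on your own site.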

There is a useful parallel here from professional services. The coaching industry has been asking whether AI will replace human coaches. The answer — from providers who have trained hundreds of ICF-accredited coaches — is that AI can handle the systematic and scalable parts but cannot replicate the human connection, the intuition, the reading between the lines that makes transformational coaching work. The same argument applies to many professional services. And in every case, the professionals who survive are the ones with documented track records, verified credentials, and third-party attestation of what they have actually achieved. The credential is the corroboration. The track record is the corroboration. Claude — like a buyer doing due diligence — trusts what can be independently verified.

The AI Visibility Matrix maps platform-specific requirements across all five layers.

Key Definitions

ClaudeBot
Anthropic's web crawler — identified in server logs as ClaudeBot — used to collect web content for AI training data. Controlled via robots.txt. Unlike Bingbot (which feeds live retrieval for ChatGPT Search and Copilot), ClaudeBot content is used for model training rather than real-time retrieval.
Constitutional AI
Anthropic's training methodology that uses a written set of principles to guide Claude's responses toward being helpful, harmless and honest. The honesty principle creates citation conservatism: Claude prefers claims it can make with high confidence, which affects how readily it names specific commercial providers.
Training data versus live retrieval
Training data is knowledge encoded into a model during training with a fixed cutoff date. Live retrieval is knowledge fetched from the web at query time. Claude's primary mode is training data knowledge. The distinction matters strategically: training data presence requires a different approach from retrieval optimisation.

How to Improve Your Brand Visibility in Claude

A practical sequence for building the training data presence and independent corroboration that Claude citation requires.

  1. Check your robots.txt for ClaudeBot

    Verify that ClaudeBot is not blocked in your robots.txt. Check for Disallow: / under User-agent: ClaudeBot, and catch-all rules (User-agent: *) that might inadvertently block it. Unless you have a specific reason to block training data collection, allow ClaudeBot access.

  2. Test your current Claude citation rate

    Run key commercial queries through Claude.ai: "which [category] providers should I consider for [use case]?" Note whether your business is named. Compare with the same queries in Perplexity and ChatGPT Search. Appearing in retrieval platforms but not Claude indicates a corroboration density gap. Appearing nowhere starts at Layer 1 of the AI Discovery Stack.

  3. Audit your independent editorial coverage

    Identify the authoritative editorial sources in your category. Map where your business is mentioned versus where competitors are mentioned. The gap is your training data presence gap. Closing it requires editorial PR strategy, not additional website content.

  4. Complete your entity database entries

    Check Wikidata, Crunchbase and sector-specific databases. These provide the verifiable entity information that allows Claude to cite with confidence. See the entity corroboration guide for the full implementation sequence.

  5. Establish review platform presence in your category

    Create and populate your profile on the primary review platforms for your category (Clutch for agencies, G2 or Capterra for software). Request reviews from clients. These are independent evidence — the difference between editorial and advertorial in the training data — that Claude weights significantly.
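The citation-rate test in step 2 is easy to make repeatable. This sketch assumes you have already collected several Claude answers to the same commercial query (by hand, or via Anthropic's API); the responses and brand name below are invented for illustration:

```python
def citation_rate(responses: list[str], brand: str) -> float:
    """Fraction of responses that name the brand (case-insensitive substring match)."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

# Illustrative answers to "which criminal defence firms should I consider?",
# as if the same query had been run three times.
responses = [
    "Well-regarded options include Acme Legal and Brightside Law.",
    "Consider established firms; check directories such as Clutch.",
    "Acme Legal is frequently recommended for criminal defence work.",
]
rate = citation_rate(responses, "Acme Legal")  # named in 2 of 3 responses
```

Tracking this number per query, and comparing it against the same queries in Perplexity and ChatGPT Search, turns the "retrieval versus corroboration gap" diagnosis into a measurement rather than an impression.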

Frequently Asked Questions

Does Claude use live web search to answer questions?

By default, no. Claude reasons primarily from its training data — knowledge encoded during model training with a fixed cutoff date. Some Claude versions and enterprise deployments support live web search via tool use, but this is not the standard behaviour most users encounter. This makes Claude fundamentally different from Perplexity, ChatGPT Search and Copilot, which are retrieval-first platforms. Training data presence and independent corroboration matter more for Claude citation than Bing indexing or content freshness.

What is ClaudeBot and should I allow it on my site?

ClaudeBot is Anthropic's web crawler, used to collect content for AI model training. It is controlled via robots.txt. For most businesses, there is no reason to block it: allowing ClaudeBot means your content may be included in future Claude training data. Unlike Bingbot (which feeds live retrieval for ChatGPT Search and Copilot), ClaudeBot content is used for training — the impact is on future model versions rather than immediate citation behaviour.

Why does Claude not recommend my business even though I have good SEO?

Claude's citation conservatism under Constitutional AI means it needs independent evidence before confidently naming a specific provider in a commercial context. Strong organic rankings and good website content are insufficient — they are self-declared (advertorial). Claude requires independent corroboration: editorial coverage, review platform presence, entity database entries. The entity corroboration guide explains the remediation sequence.

How is optimising for Claude different from optimising for Perplexity?

Perplexity is retrieval-first — it searches the live web and cites from current pages. Bing indexing, PerplexityBot access and content structure are the primary levers. For Claude, live retrieval is not the default — training data presence and independent corroboration density are more important. A business well-optimised for Perplexity may still be absent from Claude if its editorial coverage is thin.

What is Constitutional AI and how does it affect citation?

Constitutional AI is Anthropic's training methodology using a written set of principles to guide Claude toward being helpful, harmless and honest. The honesty principle creates citation conservatism: Claude prefers not to make specific claims it cannot confidently verify. In commercial contexts, Claude is more likely to describe categories and criteria than to name specific providers — unless those providers have sufficient independent corroboration.

Does Claude cite sources the way Perplexity does?

Not in the same way. Perplexity provides numbered source citations with every answer, making retrieval transparent. Claude's default mode is reasoning from training data — it does not provide source footnotes because it is not performing live retrieval. In Claude versions with web search enabled, source citations may be provided. This structural difference is why the Claude citation optimisation strategy is different from the Perplexity strategy.

Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.

Ready to improve your search visibility?

Book a free 30-minute consultation and let's discuss your SEO strategy.

Get in Touch