A marketing director at a mid-sized software company opens ChatGPT. She types: "Which managed file transfer vendors should we evaluate for our compliance requirements?"
She does not open Google. She does not type a keyword. She asks a question, the way she would ask a trusted colleague, and she expects a useful answer back.
The AI system runs its own evaluation process — silently, before she sees a single result. It retrieves what it knows. It weighs sources. It applies confidence thresholds. It generates a shortlist. She sees the output: three or four vendors, described with enough specificity to seem authoritative.
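What does that evaluation look like mechanically? No vendor publishes its pipeline at this level of detail, but the shape is well understood: retrieve candidates, score how well independent sources corroborate each one, and cite only the candidates that clear a confidence bar. A minimal sketch, with every name, weight and threshold an illustrative assumption:

```python
# A minimal sketch of a confidence-gated shortlist. All names, weights and
# thresholds are illustrative assumptions -- no public AI system documents
# its retrieval pipeline at this level -- but the shape matches the
# process described above.

from dataclasses import dataclass

@dataclass
class Candidate:
    vendor: str
    source_count: int           # independent sources mentioning the vendor
    source_independence: float  # 0..1: editorial vs self-published mix
    claim_agreement: float      # 0..1: how consistently those sources agree

CITE_THRESHOLD = 0.7  # below this, the system hedges or omits the vendor

def citation_confidence(c: Candidate) -> float:
    """Combine corroboration signals into a single confidence score."""
    coverage = min(c.source_count / 5, 1.0)  # diminishing returns past ~5 sources
    return coverage * c.source_independence * c.claim_agreement

def shortlist(candidates: list[Candidate]) -> list[str]:
    """Return only the vendors the system can cite without hedging."""
    return [c.vendor for c in candidates
            if citation_confidence(c) >= CITE_THRESHOLD]
```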
Your company may or may not be on that list.
What determined the outcome had nothing to do with how good your website is. It had nothing to do with your keyword targeting, your meta descriptions, your blog publishing frequency, or whether your H1 tags are correctly structured. It was determined by something much harder to manufacture: what independent sources said about you, and how confidently an AI system could cite you without risking being wrong.
This is the shift. And most businesses have not understood it yet.
The collapse of the information moat
For twenty years, the scarce resource in digital marketing was content. Good content took time, expertise, and money to produce. That scarcity created a signal — a site with comprehensive, well-structured content on a topic was probably authoritative. The SEO industry grew up around exploiting and refining that signal.
Then content became free.
Not cheaper. Not more efficient to produce. Free. Anyone with access to a large language model can generate five thousand words on any topic in six minutes. The words will be grammatically correct, reasonably coherent, and largely indistinguishable from professionally produced content to a casual reader. The information moat — the thing that took years and budgets to build — can now be replicated in an afternoon.
Which means it can no longer be the signal.
When a signal becomes easy to fake, it stops being a signal. Google understood this when it began discounting thin content with the Panda update. It understood it again when it began weighting E-E-A-T — Experience, Expertise, Authoritativeness, Trustworthiness — signals that require something beyond text generation to establish. The trajectory has been consistent: as each layer of the information advantage gets commoditised, the next layer becomes the differentiator.
When content becomes infinite, the only scarce signal left is independent verification.
This is the Verification Economy: when information is free, the economy shifts to verifying it. And in the Verification Economy, the signals that matter are not the ones you produced — they are the ones you earned.
We are now at the layer that genuinely cannot be commoditised.
The TripAdvisor Principle
Here is the insight that everything else in this piece builds on.
I call it the TripAdvisor Principle: the most credible signal about a business is independent verification from sources the business cannot control.
Think about why you check TripAdvisor before booking a restaurant. Not because TripAdvisor has more information than the restaurant’s own website. The restaurant’s website has professional photography, curated menus, carefully written descriptions of the atmosphere. TripAdvisor has typos and photos taken on mobile phones.
But TripAdvisor is more credible. Because the restaurant did not write it.
The independence is the signal. A review that the subject could not control, could not edit, could not commission, carries a different kind of weight than anything the subject says about itself. This is not a new insight — it is how trust has worked in human society for as long as there have been reputations to evaluate.
| Signal type | Who controls it | AI trust weight |
|---|---|---|
| Your website content | You | Low |
| Paid or sponsored coverage | You + publisher | Medium |
| Editorial mention | Independent publication | High |
| Third-party reviews | Your clients | High |
| Academic or research citation | Independent researcher | Very high |
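To make the table concrete, here is one way those weights could be encoded. The numbers are assumptions mapped from the table's Low/Medium/High/Very high labels; production systems learn weightings like these from data rather than hard-coding them.

```python
# Illustrative encoding of the table above. The numeric weights are
# assumptions; real systems learn them rather than hard-coding them.

TRUST_WEIGHT = {
    "own_website":        0.2,  # you control it: low
    "sponsored_coverage": 0.4,  # you plus the publisher: medium
    "editorial_mention":  0.8,  # independent publication: high
    "third_party_review": 0.8,  # your clients: high
    "research_citation":  1.0,  # independent researcher: very high
}

def trust_score(signal_counts: dict[str, int]) -> float:
    """Weight a tally of signals by the independence of each source type."""
    return sum(TRUST_WEIGHT[kind] * count
               for kind, count in signal_counts.items())

# Ten pages of your own content score lower than two editorial mentions
# and a single client review:
print(trust_score({"own_website": 10}))        # 2.0
print(trust_score({"editorial_mention": 2,
                   "third_party_review": 1}))  # 2.4
```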
AI systems — Claude, ChatGPT, Perplexity, Copilot — were trained on the web. Which means they were trained on the accumulated output of human trust decisions. They learned what humans learned: editorial sources over advertorial ones. Third-party mentions over self-declarations. Verified track records over claimed ones. Independent citations over owned content.
Your website is advertorial. You wrote it. You control it. You published every word of it. An AI system processing it applies the same discount a reader applies to a brochure.
What AI systems trust is the editorial layer. The Wikipedia entry. The industry publication that wrote about you because you were worth writing about, not because you paid for a sponsored post. The Clutch review from a client who had no incentive to say something nice. The academic citation. The LinkedIn recommendation from a peer who was not asked to give one. These are the signals that cannot be manufactured — because manufacturing them requires the cooperation of parties who have no obligation to cooperate.
This is what entity corroboration means in practice. Not a technical exercise in schema markup, though that matters. It is the process of building an independent verification layer around your business — a network of third-party signals that allows AI systems to cite you with confidence rather than hedging. This is why the AI Discovery Stack begins with entity corroboration rather than keyword targeting.
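At the mechanical end, corroboration is expressed in markup. A sketch of what that looks like, with a placeholder company: a schema.org Organization record whose sameAs links point at profiles the business does not control, giving a crawler independent pages against which to cross-check the entity.

```python
# Entity corroboration at the markup level: a JSON-LD Organization record
# built as a Python dict. The company name and URLs are placeholders; the
# vocabulary (@type, sameAs) is standard schema.org.

import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example MFT Vendor Ltd",
    "url": "https://www.example-vendor.com",
    # sameAs is the corroboration layer: each link points to a profile the
    # business does not fully control, so a crawler can cross-check that
    # the entity described there matches the entity described here.
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_MFT_Vendor",
        "https://clutch.co/profile/example-mft-vendor",
        "https://www.linkedin.com/company/example-mft-vendor",
    ],
}

print(json.dumps(organization, indent=2))
```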
The accountability gap
The legal profession provides the clearest lens for understanding why the human layer persists even when AI can replicate the information-processing function.
When a lawyer charges £300 an hour to draft a contract, a significant portion of what you are paying for is not the drafting. It is the liability. If the contract is wrong, the lawyer is professionally accountable. Their indemnity insurance is on the line. Their regulatory standing is on the line. Their reputation — built through years of documented outcomes — is on the line. That accountability is the thing that makes the advice trustworthy, not just useful.
An AI system that drafts the same contract carries none of that accountability. If it is wrong, the user bears the consequence. Which creates a fundamental asymmetry in the trust calculation — and it matters most for the people who can least afford to discover the error.
There is a genuine access-to-justice argument here. AI legal tools give people who could not afford £300 an hour access to something that approximates good legal guidance. That is an improvement in the world — providing better than nothing to people who would otherwise have nothing is a net good, even if the advice is imperfect.
But “better than nothing” is not the same as “safe to rely on without verification.” The accountability gap is real, and it does not close as AI becomes more capable. If anything, it widens — because as AI systems become more sophisticated, the outputs become more convincing, which makes the errors harder to identify and more costly when they surface.
The human governance layer earns its cost not by processing information better than AI can, but by bearing responsibility when the processing goes wrong. That responsibility requires a human who can be held accountable — which requires trust infrastructure that has been independently verified over time.
What AI cannot replicate
The coaching industry has been asking this question honestly: will AI replace human coaches?
The answer from practitioners who have trained hundreds of ICF-accredited coaches is instructive. AI can handle the systematic parts — tracking patterns, suggesting frameworks, providing prompts based on stated goals. What it cannot do is read the silence. Notice what is not being said. Respond to the specific human in the room, not the average human across the training dataset. Be genuinely present with someone navigating something genuinely difficult.
This is not a temporary technical limitation. It reflects something structural about what certain kinds of value require. Transformational coaching works because a real person shows up, brings their accumulated judgement about human complexity, and takes responsibility for the relationship. Remove the human and you have not just downgraded the tool — you have changed what the thing is.
The same structure applies across knowledge professions. The SEO consultant, the solicitor, the coach, the architect — what survives automation is not the information processing. It is the judgement under uncertainty, the accountability for outcomes, and the trust that accumulates from a documented track record of being right at real cost.
A hand-coded website built for a Portsmouth dog walker in 2009 has held the number one position for its primary commercial term for seventeen years. Not through continuous intervention — through getting the foundational signals right and being consistently present. That is what a track record looks like. It is not something an AI can generate on demand, because it requires time, consistency, and outcomes that were documented by the market, not by the consultant.
An outdoor living brand that grew from near-invisibility to seven-figure national turnover through organic search did so because someone understood the semantic space buyers were navigating and built the right architecture to own it. The AI Overview now cites that brand. The trust signals that made that possible — the entity infrastructure, the corroborated reviews, the editorial coverage, the documented case study — took years to build. They cannot be backdated.
The atrophy problem — and why it reinforces the principle
There is a dimension to this shift that is genuinely concerning, and sits outside the usual SEO conversation.
The people using AI as a replacement for thinking rather than an amplifier of it are degrading the very capability that would let them evaluate whether the AI is wrong.
Spell-checkers reduced active spelling fluency. GPS reduced spatial navigation ability. Calculators reduced arithmetic fluency. Each was a narrow cognitive offload with limited second-order consequences. What is happening now is different: the offloading is happening at the level of reasoning itself. Not arithmetic. Not spelling. The capacity to evaluate a claim, weigh evidence, and reach a conclusion under uncertainty.
The feedback loop is uncomfortable. To verify AI output, you need the reasoning capability that AI use is gradually eroding. The people who will navigate this well are the ones who treat AI the way a good architect treats structural analysis software: it handles the calculation, but the judgement about what to build, why, and whether the output makes sense is still theirs. That judgement is what experience builds. It is also what compounds — and what cannot be replaced.
There is a second-order effect worth noting. The people most at risk from AI misinformation are the ones who have offloaded the reasoning required to evaluate it. Which means the trust signals become more important, not less — because when critical thinking is degraded, you rely on the credibility infrastructure of the sources you are using. The TripAdvisor Principle operates for individual users the same way it operates for AI systems: you trust what the subject could not have written about itself.
AIs evaluating AIs
Here is the dimension most people are not yet accounting for.
AI systems already evaluate other AI systems. The retrieval pipelines that determine what gets cited are making assessments of credibility — distinguishing editorial from advertorial, independent from self-generated, verified from claimed. As AI-generated content floods every surface, those evaluation systems will get more sophisticated, not less.
The result is predictable: undifferentiated AI-generated content gets downweighted. Content with verifiable human authorship, documented provenance, independent citation, and consistent entity signals gets preferenced. The signal that survives is the one that cannot be manufactured — because it required the cooperation of parties who had no obligation to cooperate.
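A sketch of what that downweighting could look like. The feature names and multipliers are assumptions, not a disclosed algorithm; the thing to notice is that every feature that raises the weight requires a party other than the author to have cooperated.

```python
# Hypothetical provenance scoring for retrieved documents. Feature names
# and multipliers are illustrative assumptions, not a disclosed algorithm.

from dataclasses import dataclass

@dataclass
class Doc:
    has_named_author: bool   # verifiable human byline
    inbound_citations: int   # independent pages citing this one
    entity_consistent: bool  # agrees with the entity's corroborated record
    looks_generated: bool    # stylometric or watermark heuristics fired

def retrieval_weight(d: Doc) -> float:
    """Downweight undifferentiated synthetic text; preference provenance."""
    weight = 1.0
    if d.looks_generated and d.inbound_citations == 0:
        weight *= 0.3                 # uncited synthetic content sinks
    if d.has_named_author:
        weight *= 1.5                 # documented authorship is preferenced
    if d.entity_consistent:
        weight *= 1.5                 # consistent entity signals compound
    weight *= 1.0 + 0.1 * min(d.inbound_citations, 10)
    return weight
```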
This is why the principle that has held for twenty years of SEO holds more firmly now than ever: strong brands rank and dominate. You are who you hang with. The contextual link from an independent authority means more than a hundred links from sites that owe you something. The editorial mention in a publication that was not paid to mention you means more than a thousand words of your own content.
Those principles were true when Google was the only game in town. They are more true now that AI systems are making trust assessments simultaneously across the web. And they will be more true still in 2030, when agent-mediated procurement means an AI is evaluating your business before a human ever sees your name. What I call the Trust Infrastructure Layer — the network of independent verification signals that sits above content, entities and links — is the asset that compounds fastest and is hardest to displace.
The three signals that cannot be manufactured
The practical implication of all of this reduces to three signals.
Editorial mentions. Coverage in publications and platforms that have no obligation to cover you, from writers who have no financial relationship with you. This is the signal that tells AI systems you are worth knowing about. Not a press release. Not a sponsored feature. The piece that ran because the editor decided it should run.
Independent reviews. The TripAdvisor layer — what clients said about you when they had no incentive except honesty. Clutch for agencies. G2 for software. The legal directory for law firms. Not because the AI system needs to read the reviews, but because their existence confirms that your business has operated at real cost, with real clients, with real consequences.
Documented outcomes. Specific, attributed, verifiable results. Not “we help enterprises improve efficiency” — but “we helped this specific business achieve this specific outcome, documented here, verifiable there.” An AI system asked about you in 2030 will be able to fabricate a generic description of what your category does. What it cannot fabricate is your specific track record. That is the permanently retrievable asset.
These three signals share a common property: they require the cooperation of parties who have no obligation to help you. Which is exactly why they are credible. And exactly why they cannot be manufactured on demand.
The question you should be asking
The question most businesses are asking about AI is: how do we optimise for AI search?
It is the wrong question. It assumes AI discovery works the same way traditional search worked — that if you apply the right techniques to the right content, visibility follows. Some of that is true. But the most important question is different:
When an AI system is asked about us, can it cite us confidently? And if not, why not?
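The first half of that question is empirically testable rather than a matter of guesswork. A minimal audit sketch, assuming a hypothetical ask_assistant() helper that wraps whichever assistant APIs you have access to; the queries and brand name are placeholders:

```python
# A minimal citation audit. ask_assistant() is a hypothetical helper that
# sends a prompt to an assistant API and returns its text response; the
# queries and brand name below are placeholders.

QUERIES = [
    "Which managed file transfer vendors should we evaluate for compliance?",
    "Who are the leading vendors in this category for regulated industries?",
]
BRAND = "Example MFT Vendor"
HEDGES = ("may", "might", "reportedly", "appears to", "some sources")

def audit(ask_assistant) -> list[dict]:
    """Log whether each answer cites the brand, and whether it hedges."""
    results = []
    for query in QUERIES:
        answer = ask_assistant(query).lower()
        cited = BRAND.lower() in answer
        hedged = cited and any(h in answer for h in HEDGES)
        results.append({"query": query, "cited": cited, "hedged": hedged})
    return results
```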
The answer to “why not” is almost never “we need more content.” It is almost always some version of: the independent verification layer is too thin. The AI system has your words. It does not have enough third-party attestation to use those words with confidence.
That is the gap. And closing it requires building something that cannot be shortcut, cannot be automated, and cannot be manufactured on demand.
The moat has not disappeared. It has moved. From information to verification. From content to credibility. From what you say about yourself to what others say about you unprompted.
That is the TripAdvisor Principle. And it changes everything about how visibility works.