The most sophisticated personalisation system ever built collapsed within a month of launch. The reason had nothing to do with AI capability — and everything to do with a problem most businesses haven’t even identified yet.
In the summer of 2025, Google quietly launched Daily Hub on the Pixel 10. By any technical measure it was extraordinary — three parallel personalisation engines, Gemini orchestration, hierarchical embeddings, real-time ambient scoring. A system designed to anticipate what you want before you’ve thought to ask for it.
Within a month, Google suspended it.
Not because Gemini wasn’t capable. Not because the personalisation logic was wrong. It collapsed because the output was embarrassing. A system of that sophistication was recommending “belly dance finger cymbals” to technology and SEO professionals. Suggestions so disconnected from reality they bordered on surreal.
The post-mortem — inferred from observed system behaviour, Google’s published patent history, and architecture patterns consistent across its Knowledge Graph and Gemini-powered systems — revealed something that should stop every marketing director, CEO and technology buyer in their tracks. Because it describes exactly the problem with AI investment decisions being made right now, across every industry, at enormous cost.
The estate agent with no deeds
Imagine hiring the best estate agent in the country. Sharp, experienced, knows every serious buyer in the market. You put them on your most important listing.
Then you hand them a property with no address on record, no title deeds, no planning history, no EPC certificate, no photos, no comparable sales data. Nothing independently verifiable.
They cannot sell what they cannot verify. Their capability is completely irrelevant to your outcome. The agent is world-class. The documentation is non-existent. The result is silence — or worse, a botched recommendation that damages your position.
AI doesn’t choose the best business. It chooses the safest one to recommend.
That is precisely what most businesses are doing with AI right now. The most capable recommendation engines ever built are ready to name providers, shortlist vendors, and suggest services to buyers who are actively looking. And they are producing silence or absurdity — not because the AI isn’t good enough, but because the businesses they should be recommending haven’t given them anything solid to work with.
What the architecture actually reveals
While Google has not formally published a full technical breakdown of Daily Hub, the inferred architecture aligns with patterns visible across its patents, Knowledge Graph design, and Gemini-powered systems — and it tells us something important about how all modern AI recommendation systems operate.
At a system level, every AI recommendation engine is solving the same problem: selecting a real-world entity under uncertainty. The quality of that selection is determined less by model sophistication and more by the reliability of the entity signals available to it.
The pattern is consistent. There is a memory layer — a dual index running simultaneously. One index for content. One for entities — every named thing extracted from that content (organisations, people, products, concepts), each with its own confidence score and type classification. Most businesses only have the first. The systems making recommendations need both.
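As a rough illustration, that dual index can be sketched as a pair of structures: one keyed by document, one keyed by entity. Everything here (the class names, fields and confidence values) is a hypothetical simplification for this article, not any real system's schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the dual-index "memory layer": one index for raw
# content, one for the entities extracted from it. Names are illustrative.

@dataclass
class EntityRecord:
    name: str          # canonical entity name
    entity_type: str   # type classification, e.g. "Organization", "Product"
    confidence: float  # extraction confidence, 0.0 to 1.0
    source_doc: str    # document the entity was extracted from

@dataclass
class DualIndex:
    content: dict = field(default_factory=dict)   # doc_id -> raw text
    entities: dict = field(default_factory=dict)  # name -> [EntityRecord]

    def add_document(self, doc_id, text, extracted):
        self.content[doc_id] = text
        for ent in extracted:
            self.entities.setdefault(ent.name, []).append(ent)

index = DualIndex()
index.add_document(
    "page-1",
    "Acme Ltd provides forensic accounting services in London.",
    [EntityRecord("Acme Ltd", "Organization", 0.92, "page-1")],
)
```

A business with only the content side of this structure is searchable; only a business present in the entity side, with a usable confidence score, is recommendable.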
This wasn’t a single-point failure. Systems like this fail through a combination of weak entity grounding, insufficient behavioural context, and feedback loop instability. But the critical constraint — the one that determines whether outputs are trustworthy at all — is the strength of the underlying entity signal. Without it, capability becomes irrelevant.
The AI recommendation pipeline
Every modern AI system making commercial recommendations follows a version of the same pipeline:
1. Retrieve — candidate sources are pulled from the index based on query relevance.
2. Validate — entities within those sources are matched against structured knowledge.
3. Score — confidence is assigned based on corroboration across multiple sources.
4. Ground — claims are checked against verified entity data.
5. Synthesise — a response is constructed from what passes the confidence threshold.
If your business is weak at step 2 or 3 — if entity validation finds inconsistent data, or corroboration scoring finds only one source (your own website) — you never make it into the final output. Regardless of how good your content is. Regardless of how much you’ve invested in the tool sitting on top.
This is the insight most AI investment conversations miss entirely. The conversation focuses on step 5 — the synthesis, the visible output. The constraint is almost always at step 2 or 3.
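For illustration only, the five stages can be sketched in a few lines of Python. The index shape, the knowledge base, the 0.6 threshold and the corroboration-based scoring rule are all assumptions invented for this sketch, not any vendor's actual implementation:

```python
# Illustrative sketch of the five-stage recommendation pipeline.
# All data structures and scoring rules here are assumptions.

CONFIDENCE_THRESHOLD = 0.6

def recommend(query, index, knowledge_base):
    # 1. Retrieve: pull candidate sources by query relevance (naive keyword match)
    candidates = [doc for doc in index if query.lower() in doc["text"].lower()]
    results = []
    for doc in candidates:
        # 2. Validate: match the document's entity against structured knowledge
        entity = knowledge_base.get(doc["entity"])
        if entity is None:
            continue  # an unverifiable entity never reaches the output
        # 3. Score: confidence grows with independent corroborating sources
        score = min(1.0, entity["corroborating_sources"] / 5)
        # 4. Ground: drop claims that contradict verified entity data
        if doc["claimed_category"] != entity["category"]:
            continue
        if score >= CONFIDENCE_THRESHOLD:
            results.append((doc["entity"], score))
    # 5. Synthesise: build the answer only from what passed the threshold
    return sorted(results, key=lambda r: -r[1])

knowledge_base = {
    "Acme Forensics": {"category": "accounting", "corroborating_sources": 6},
    "Solo Partners": {"category": "accounting", "corroborating_sources": 1},
}
index = [
    {"entity": "Acme Forensics", "text": "Forensic accounting services",
     "claimed_category": "accounting"},
    {"entity": "Solo Partners", "text": "Forensic accounting services",
     "claimed_category": "accounting"},
]
shortlist = recommend("accounting", index, knowledge_base)
```

In this toy run, Solo Partners has content just as relevant as Acme Forensics, but with a single corroborating source it scores 0.2 and never reaches the output. The constraint is at steps 2 and 3, exactly as described above.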
The part most businesses haven’t realised yet
Here is the uncomfortable reality. This isn’t a future problem. It’s already happening.
Right now, buyers are asking AI systems: “Who are the best providers in this category?” “What software should we use for this use case?” “Which companies specialise in this?” And those systems are producing answers. They are retrieving. They are validating. They are scoring. They are recommending.
The only question is whether you are in the candidate set.
Most businesses assume they are — because they rank in Google, because they have a strong website, because they’ve invested in content. But the AI system is not asking: “Who has the best website?” It is asking: “Which entities can I verify with enough confidence to recommend?”
If your entity signal is weak, inconsistent, or uncorroborated, you are not being ranked lower. You are not being outranked. You are not being considered at all.
And when that happens, the system does not return an empty answer. It recommends someone else.
Most businesses think they are competing for visibility. They are actually competing for eligibility.
The pattern you’ve already experienced
This failure mode isn’t new. It just has a new face and vastly higher commercial stakes.
You’ve seen it with Amazon — recommending a second lawnmower immediately after you’ve bought one. The capability is there. The contextual signal that you’ve already solved the problem isn’t. You’ve seen it with Netflix — the endless carousel of things adjacent to something you watched once, three years ago. You’ve seen it with LinkedIn’s “People You May Know” — strangers in entirely different industries, because the entity relationships in your profile are thin and the system is filling gaps with inference.
In every case, the failure mode is identical: sophisticated capability, insufficient structured input, confident but wrong output. Whether recommending a film or a forensic accounting firm, the system is resolving the same problem: selecting an entity from incomplete, probabilistic data under uncertainty. The process is the same. The consequences of getting it wrong are not.
The Entity Confidence Model
What determines whether an AI system can recommend your business with confidence comes down to five things — the Entity Confidence Model:
1. Entity presence — does a structured, machine-readable definition of your business exist, independent of your own website? Are you a recognised named entity in the knowledge databases AI systems query directly?
2. Cross-source corroboration — do multiple independent sources confirm the same facts? Your website tells a system what you say about yourself. Independent corroboration tells it what others can verify. A business appearing in two sources with inconsistent naming, against a competitor in six with consistent identifiers, will be assigned a lower confidence score even if its content is better.
3. Structural consistency — is the same name, address, description and category identical across every platform that indexes your business? Every inconsistency is a confidence penalty.
4. Technical reliability — does your infrastructure signal trustworthiness? Performance, crawl architecture, schema validity. These determine whether a source is worth including at all.
5. Extraction readiness — is your content structured so AI systems can pull discrete, attributable blocks? Definitions, statistics with attributed sources, named claims, FAQ structures.
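One way to picture how the five factors might combine is as a weighted score. The weights and input values below are invented for illustration; no AI system publishes its actual formula.

```python
# Hypothetical weighted-sum sketch of the Entity Confidence Model.
# Weights and example inputs are assumptions, not a published formula.

def entity_confidence(presence, corroboration, consistency,
                      technical, extractability):
    """Each argument is a 0.0-1.0 score for one factor in the model."""
    weights = {
        "presence": 0.30,        # machine-readable definition exists independently
        "corroboration": 0.30,   # independent sources confirm the same facts
        "consistency": 0.20,     # identical name/address/category everywhere
        "technical": 0.10,       # performance, crawlability, valid schema
        "extractability": 0.10,  # content structured into attributable blocks
    }
    scores = dict(zip(weights, (presence, corroboration, consistency,
                                technical, extractability)))
    return sum(weights[k] * scores[k] for k in weights)

# The example from point 2 above: two sources with inconsistent naming
# versus six sources with consistent identifiers.
weak = entity_confidence(0.5, 0.3, 0.4, 0.8, 0.7)    # ~0.47
strong = entity_confidence(0.9, 0.9, 0.9, 0.8, 0.7)  # ~0.87
```

Note where the weight sits in this sketch: presence and corroboration dominate, and neither can be fixed by publishing more content on your own site.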
The dos and don’ts nobody is telling you
Most AI investment conversations focus on capability. Which model. Which platform. Which tool. How many outputs per month. That is the wrong conversation. The conversation that determines outcomes is: what are you feeding it?
What businesses consistently get wrong
Buying the agent before preparing the property. Investing in AI-powered marketing tools without first ensuring the entity infrastructure those systems depend on is clean, consistent and corroborated. Infrastructure before capability. Every time.
Treating their own website as sufficient evidence. Your website is self-declaration — the equivalent of a CV written by the candidate. AI systems are looking for independent verification of what you claim.
Ignoring Bing. ChatGPT Search and Microsoft Copilot both retrieve from Bing’s index. A site absent from Bing is invisible to both platforms simultaneously — regardless of Google rankings. This is typically an afternoon’s work to fix via Bing Webmaster Tools.
Inconsistent entity information across the web. Name variations, old addresses, different descriptions across platforms. Every inconsistency is a confidence penalty at the validation stage.
No schema markup — or broken schema markup. Schema is how you declare your entity to machines. Organisation schema with correct identifiers and cross-references to authoritative sources is the closest thing to handing a system properly completed deeds. Broken schema actively introduces confusion at the entity validation stage.
What actually works
Build the entity layer before scaling the content layer. Entity schema — Organisation, Person, Service, with explicit identifiers linking to relevant third-party databases — tells every AI system who you are in the format machines parse directly.
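A minimal example of what such a declaration looks like in practice: Organization markup in JSON-LD, the schema.org format. The company name, URL, address and sameAs links below are placeholders, not real identifiers.

```python
import json

# Minimal Organization entity declaration in JSON-LD (schema.org vocabulary).
# All values are placeholders for illustration.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency Ltd",
    "url": "https://www.example.com",
    # sameAs cross-references the entity to independent, authoritative profiles
    "sameAs": [
        "https://www.linkedin.com/company/example-agency",
        "https://www.wikidata.org/wiki/Q00000000",
    ],
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "London",
        "addressCountry": "GB",
    },
}

# Served in a page inside a <script type="application/ld+json"> element
json_ld = json.dumps(org_schema, indent=2)
```

The sameAs array is the part that does the heavy lifting: it links your self-declared identity to independently verifiable records, which is exactly the corroboration signal the validation stage is looking for.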
Earn independent corroboration systematically. Third-party review platforms. Structured company databases. Editorial mentions in relevant industry publications. One of the highest-leverage, lowest-cost actions is establishing a presence in structured knowledge databases — because these feed the Knowledge Graph layer that many AI systems depend on for entity disambiguation and verification. Not sufficient alone, but frequently the missing piece of the validation layer between retrieval and recommendation.
Treat technical performance as entity signal. Performance scores, clean crawl architecture, valid structured data — these signal that the infrastructure is maintained by someone who takes accuracy seriously.
Structure content for extraction, not just reading. Definition blocks, statistics with explicitly attributed sources, FAQ structures, named claims. These are the formats AI extraction pipelines are built to process.
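A toy example of why structure matters for extraction: from consistently formatted text, a trivial parser can pull clean, attributable question-and-answer pairs, where free prose would require inference. The "Q:/A:" convention here is an assumption made for the sketch, not a standard any AI system mandates.

```python
import re

# Toy extractor: pulls (question, answer) pairs from "Q: ... A: ..." text.
# The formatting convention is an illustrative assumption.

def extract_faq(text):
    """Return (question, answer) tuples from Q:/A:-formatted text."""
    return re.findall(r"Q:\s*(.+?)\s*A:\s*(.+?)(?=\nQ:|\Z)", text, re.S)

structured = (
    "Q: What is entity schema? A: Machine-readable identity data.\n"
    "Q: Why does it matter? A: It feeds AI validation."
)
pairs = extract_faq(structured)
```

Each extracted pair is a discrete, attributable block: exactly the unit an extraction pipeline can quote, score and cite. A paragraph burying the same facts in narrative yields nothing this cheaply.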
The deeper pattern in AI investment
Daily Hub was Google’s vision of the future — a system that knows you well enough to anticipate your needs before you’ve articulated them. It failed because the entity infrastructure feeding it couldn’t bear the weight of the ambition sitting on top.
This will keep happening. The pattern will be familiar to anyone who lived through CRM implementations in the early 2000s. Enormous investment in Salesforce or Siebel, followed by poor outputs, followed by frustrated stakeholders blaming the tool. The tool was fine. The data going in — inconsistent, duplicated, incomplete, unverified — was the actual problem. Garbage in, garbage out was the lesson then. It is the lesson now, with higher stakes and faster consequences.
The businesses that understand this are asking a different question before signing any AI contract: not “what can this AI do?” but “what does this AI need — and do we have it?”
The only question that matters
Before you buy an AI tool, commission an AI strategy, or invest in AI-powered marketing — ask one question:
If a system had to choose between your business and a competitor using only independently verifiable data — not your website — would you still be selected?
Most businesses, if they answer honestly, realise: probably not. Not because they aren’t better. But because they haven’t prepared the documentation.
AI systems don’t recommend the best option. They recommend the most verifiable one. If that isn’t you, you don’t exist in the decision.
Sort the deeds first.
Most audits focus on content and rankings. An AI Visibility Audit maps whether you are even eligible to be recommended — diagnosing your entity infrastructure against the five stages that determine whether an AI system can select your business with confidence, and telling you exactly what to fix in what order.