Why Your AI Assistant Shouldn’t Be an Open Library

There is a growing assumption in AI product circles that open is always better. Connect everything. Expose your data through a universal standard. Let any AI system query your business in real time. Meet your customer wherever their AI already is.

It sounds progressive. It isn’t always right.

The difference between a library and an art gallery

Walk into a public library and you can take whatever you want. Browse the shelves, bring your own reading glasses, follow whatever thread interests you. The librarian helps you find things — they don’t decide what you should find. That’s the whole point. Open access. Self-directed. Everything available to everyone.

Now walk into a serious art gallery. Someone has decided what’s on those walls. Someone chose the sequence, the lighting, what sits next to what and why. You’re not browsing — you’re being guided through an argument. You leave having seen exactly what the curator wanted you to see, understood through the lens they intended. That experience didn’t happen by accident. Every decision in it was deliberate.

Both exist. Both serve a purpose. The mistake is applying the wrong model to the wrong problem.

Right now, most of the advice being given to professional services businesses about AI assistants is pushing them toward the library. Build an MCP server. Open a connection. Expose your data. The assumption is that reach and openness are the same thing as value. They’re not.

Think about what a good solicitor does

Before a solicitor advises a client, they don’t just say the first thing that comes to mind. They draw on what they know, what they’ve verified, what they’d stake their professional reputation on. They’re not searching the internet in front of you. They’re drawing from a body of knowledge they’ve built, tested, and taken responsibility for.

That’s what a curated AI assistant does. Mine only tells visitors what I’ve decided is accurate enough to put my name to.

An open system that queries whatever it can find is the equivalent of asking a stranger in the street for legal advice. They might be right. You have no way of knowing. And crucially — neither do they.

The solicitor costs more than the stranger. That’s not a coincidence. Control creates trust. Trust creates value. Value justifies the price.

The TripAdvisor principle — and why it applies here

Here’s something I’ve been saying for over a decade, and it has only become more true: people trust what others say about you more than what you say about yourself.

Think about why you check TripAdvisor before booking a hotel. The hotel’s own website will tell you it’s wonderful. Of course it will. But TripAdvisor is editorial — nobody paid those reviewers. That’s precisely why you believe them.

This is the same principle that governs how AI systems decide who to recommend. They don’t trust your website’s claims about how good you are. They look for what others say about you — independently, consistently, across credible sources. The restaurant that only has its own word for it is a risk. The one with 400 consistent reviews on three platforms is a safe recommendation.

An open AI integration that draws from your database gives the AI access to your information. A curated system that gives the AI access to your verified, governed, structured expertise — paired with everything that third parties have said about you — is a fundamentally different proposition.

Strong brands rank and dominate. In the AI era: strong brands rank, get cited, and dominate. The principle hasn’t changed. The surface it plays out on has.

What the gallery model actually gives you

When I built the AI assistant on seostrategy.co.uk, I made a deliberate choice. Not because open integration is technically inferior — it isn’t — but because curation solves a different problem.

You control what the AI knows. The assistant draws from a knowledge base I maintain and update. It doesn’t speculate. It doesn’t fill gaps with plausible-sounding inaccuracies. When it reaches the edge of what it knows, it says so — and offers to help another way. That behaviour is designed. In an open system, you’re hoping for it.
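That designed edge-of-knowledge behaviour can be sketched in a few lines. This is an illustration, not the actual assistant: the class names are hypothetical, and a naive keyword overlap stands in for whatever similarity scoring a real retrieval layer would use. The point is the shape — answer only from the curated collection, and fall back deliberately when confidence is too low.

```python
# Sketch of a curated assistant that refuses to speculate.
# All names (Entry, CuratedAssistant) are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Entry:
    topic: str
    answer: str

class CuratedAssistant:
    """Answers only from a maintained knowledge base; declines otherwise."""

    FALLBACK = ("I don't have verified information on that. "
                "I can put you in touch with the team instead.")

    def __init__(self, entries):
        self.entries = entries

    def _match(self, question: str):
        # Naive keyword overlap stands in for embedding similarity.
        q_words = set(question.lower().split())
        best, best_score = None, 0
        for entry in self.entries:
            score = len(q_words & set(entry.topic.lower().split()))
            if score > best_score:
                best, best_score = entry, score
        return best, best_score

    def answer(self, question: str, threshold: int = 1) -> str:
        entry, score = self._match(question)
        if entry is None or score < threshold:
            return self.FALLBACK  # designed edge-of-knowledge behaviour
        return entry.answer

kb = [Entry("pricing seo audit", "Audits start from a fixed-fee engagement."),
      Entry("healthcare it experience", "We specialise in healthcare IT SEO.")]
bot = CuratedAssistant(kb)
print(bot.answer("what does an seo audit cost?"))      # answered from the collection
print(bot.answer("tell me about quantum computing"))   # declined, by design
```

The fallback is the commercially important branch: it is authored, on-brand, and routes the visitor somewhere useful instead of improvising.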

You control the experience. The visitor on your website isn’t “using AI.” They’re using your assistant. Your colours, your brand, your lead capture logic. That distinction matters commercially. An integration that lives inside someone else’s AI tool has no brand presence — the experience belongs to their tool, not to you.

You control the risk. There is a defined perimeter. I know exactly what information the AI can and cannot access. An open connection into your systems requires a zero-trust security posture — carefully scoped permissions for what the AI can read, write, and trigger. Done carelessly, it’s not a data layer. It’s a door you left open.
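A defined perimeter is simple to express in code. The sketch below assumes a deny-by-default allowlist; the action and resource names are hypothetical, but the structure is the point — the AI can read three things, write nothing, and trigger nothing, and anything unlisted is refused.

```python
# Illustrative zero-trust perimeter for an AI integration.
# Action verbs and resource names are hypothetical examples.
ALLOWED = {
    "read":    {"services", "case-studies", "opening-hours"},
    "write":   set(),  # the assistant can change nothing
    "trigger": set(),  # ...and can fire no actions
}

class PermissionDenied(Exception):
    pass

def authorise(action: str, resource: str) -> None:
    """Deny by default: anything not explicitly allowed is refused."""
    if resource not in ALLOWED.get(action, set()):
        raise PermissionDenied(f"{action} on {resource!r} is out of scope")

authorise("read", "services")  # inside the perimeter: no exception
try:
    authorise("write", "client-records")  # refused: no writes at all
except PermissionDenied as exc:
    print(exc)
```

Deny-by-default is what makes the perimeter auditable: the allowlist is the complete answer to "what can the AI touch?", rather than a hopeful inventory of what it probably won't.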

You have a reason to charge for it. A curated, expertly maintained intelligence system is worth something. A database connection is a utility. Utilities don’t command premium pricing. Curated expertise does. The gallery charges admission. The library is free.

The landscape got bigger — the advice didn’t change

The honest case for open integration — what’s currently called Model Context Protocol, or MCP — is that it becomes genuinely important when AI moves from answering questions to completing tasks. Not “here are three suppliers” but “I’ve checked availability and placed the order.” That shift is coming, and the businesses building toward it now will be better positioned than the ones who wake up to it in 2028.

But there’s a sequencing point that most of the MCP enthusiasm ignores.

You can’t skip straight to Stage 5. You need the foundations — consistent entity identity, external corroboration, explicit positioning, structured content that AI systems can actually extract and cite. A live data layer on top of a recognition problem doesn’t solve the recognition problem. The AI can query your inventory in real time and still not recommend you, because it doesn’t have enough confidence in who you are to name you.
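One concrete piece of those foundations is entity markup: emitting schema.org JSON-LD so that crawlers and AI systems can extract who you are and where the third-party corroboration lives. A minimal sketch, with placeholder names and URLs throughout:

```python
# Sketch: generating a schema.org Organization JSON-LD snippet
# for consistent entity identity. All names and URLs are placeholders.
import json

organisation = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Consultancy Ltd",
    "url": "https://www.example.com",
    "sameAs": [  # pointers to independent, third-party profiles
        "https://www.linkedin.com/company/example",
        "https://uk.trustpilot.com/review/example.com",
    ],
    "knowsAbout": ["SEO", "healthcare IT", "legal services"],
}

# Wrapped for embedding in a page <head>.
snippet = ('<script type="application/ld+json">\n'
           + json.dumps(organisation, indent=2)
           + "\n</script>")
print(snippet)
```

The `sameAs` links are where the TripAdvisor principle meets the markup: they tell machines where to find what others say about you, not just what you say about yourself.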

The buyers are still out there. They’re just sitting in ChatGPT and Perplexity asking more complex questions than they used to ask Google. The landscape got bigger. The platforms fragmented. The challenge for any business is covering more bases than ever before. But the underlying truth hasn’t shifted: strong brands, with strong third-party signals, in the right places, with the right structure, rank and get cited and dominate. That’s been true for ten years. It’s more true now.

Gallery first. Foundations solid. Live data layer when the use case demands it — and only then.

When the library card is the right answer

I’m not arguing that open access is always wrong. There are real scenarios where it makes more sense than a curated experience.

If your clients are technical — developers, architects, data teams who live inside their own AI tools all day — meeting them in those tools is genuinely more useful than asking them to open a browser tab. If you’re building internal tooling for your own team, open access to live operational data is often exactly right. And there’s a hybrid worth considering: the curated public experience for website visitors, paired with a governed API layer for your highest-value clients who want your methodology inside their own workflow. That’s a product tier, not a contradiction.

The mistake is applying the library model to a gallery problem. Most professional services businesses building or buying AI assistants right now are doing exactly that — because open sounds more technically sophisticated. It isn’t. It’s just different.

The question to ask

If you’re deciding how to deploy AI that represents your expertise, start here: who is this for, and what do I want them to experience?

If the answer is potential clients on your website, encountering your expertise in a way that builds confidence and drives enquiries — build the gallery. Curate the collection. Control the door.

The gallery is harder to build. It requires judgment about what goes in the collection, maintenance as your expertise evolves, and a deliberate decision about what the experience should feel like. That difficulty is precisely why it’s worth more.

A pretty website that nobody can find is useless. An AI assistant that tells people whatever it can find is dangerous. The middle ground — a governed, curated, citable knowledge system that builds trust before the first call — is the thing worth building.

For the full picture on where MCP fits in the AI recommendation pipeline: MCP Will Change Which Businesses AI Recommends. For what a curated AI assistant looks like in practice: AI Knowledge Agents.

Related topics:

ai-knowledge-agents ai-seo ai-visibility entity-seo future-of-seo llm-optimisation search-trends
Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.