Last updated: March 2026
The Argument in One Sentence
Every new acronym in this industry — AEO, GEO, AIO, AAO, LLM optimisation — describes the same discipline operating on a new surface. The practitioners who build on that understanding will compound their advantage across every surface change. The ones who treat each acronym as a new discipline will rebuild from scratch every eighteen months.
This is not a contrarian position. It is an observation about a pattern. And right now, with the vocabulary of AI visibility fragmenting in real time and most practitioners either panicking about the change or dismissing it, understanding the pattern is worth more than any tactical playbook.
What Has Never Changed
Strip everything back to first principles. A business exists to serve customers. Customers need to find the business before they can become customers. The act of being findable — of appearing at the moment someone is looking — is the commercial problem that every visibility discipline has always tried to solve.
That problem has been constant since the first Yellow Pages directory. The surface it operates on has not been. Every decade or so, a new discovery system achieves scale — enough users finding enough answers through it that businesses cannot afford to be absent from it. When that happens, a new execution discipline emerges. When enough practitioners are working on that execution, it gets a name. The name accumulates vocabulary, tools, job titles, conferences, and eventually enough search volume for keyword researchers to notice it.
What changes is never the goal. It is always the execution — the specific technical and content practices required to achieve visibility on the new surface. The goal is constant: be the answer when someone looks.
The Surfaces, in Order
The history of digital visibility is a history of new surfaces appearing, achieving scale, and requiring new execution while the underlying problem stays identical.
Web directories and early search (1994–2000)
Before Google, directories like the Yahoo Directory and DMOZ were the primary discovery mechanism. Getting listed was manual, categorical, and editorial. The execution was simple: submit your site, get accepted, appear in the right category. The goal was identical to everything that followed: be where people look.
Search engine optimisation (2000–present)
Google achieved dominance. The discovery surface became a ranked list of ten blue links. Execution adapted: keyword research to understand query vocabulary, technical infrastructure to ensure crawlability, content to match query intent, links to signal authority. This is what the industry called SEO — and still calls SEO. The goal was identical to directories: be where people look, ranked above alternatives.
The 3Cs framework I built in 2010 — Code, Content, Contextual Linking — was simply an attempt to name the three execution pillars of that surface clearly enough to explain them to clients. Code (technical foundation), Content (topical relevance), Contextual Linking (authority signals). It was not a new discipline. It was a practical model for the existing one.
Local and mobile search (2008–2016)
Smartphones made location relevant to every search. The Google Maps pack appeared above organic results for local queries. Execution adapted: Google Business Profile optimisation, NAP consistency across directories, review management, local schema markup. A new set of practices, a new sub-discipline (“local SEO”), the same goal: be findable when someone nearby is looking.
Voice and answer engines (2016–2022)
Voice assistants — Alexa, Siri, Google Assistant — returned single spoken answers to voice queries. Featured snippets in Google became the primary target. Answer Engine Optimisation emerged as a practice: structuring content to be extracted as the definitive answer to a specific question. FAQ schema, concise answer paragraphs, question-format headings. The surface changed. The goal did not: be the answer when someone asks.
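For illustration, the structuring practice described above is typically implemented as schema.org FAQPage markup embedded in the page. A minimal sketch, assuming Python for generating the JSON-LD; the question and answer text are placeholders, and only the schema.org structure itself is standard:

```python
import json

# Build schema.org FAQPage markup (JSON-LD): the structure answer
# engines extract question/answer pairs from. The question and
# answer text here are hypothetical placeholders.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Answer Engine Optimisation?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Structuring content so a discovery system can "
                        "extract it as the definitive answer to a question.",
            },
        }
    ],
}

# Serialise for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(faq, indent=2))
```

Note how the markup mirrors the editorial practice: a question-format heading becomes a Question entity, and the concise answer paragraph becomes its acceptedAnswer.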
Generative AI answers (2022–present)
ChatGPT launched in November 2022. Perplexity, Google AI Overviews, Microsoft Copilot followed. The discovery surface became a synthesised paragraph — one answer drawn from multiple sources, not a list of ten links. Execution adapted again: paragraph-level content structure, entity authority, schema that AI systems can parse without reading marketing prose, Bing indexing for ChatGPT and Copilot coverage. Princeton University’s GEO-Bench study in 2024 put statistical weight on what practitioners already knew — specific structural techniques improve AI citation visibility by 30–40%. The discipline got called GEO, AIO, AEO, LLM optimisation, AI SEO. The goal was identical to every previous surface: be the answer when someone looks.
Agentic AI (2025–present)
The newest surface. AI agents do not answer questions — they complete tasks. A procurement manager asks an AI agent to research the top five managed file transfer solutions and compare them on security, compliance and pricing. The agent does not return a list of links. It visits websites, reads documentation, evaluates claims, cross-references sources, and delivers a structured recommendation. The human never sees a SERP. The human may never visit your website. The agent did the evaluation for them.
Execution adapts again: structured data that agents can extract in milliseconds, page speed fast enough for agent timeouts (typically one to five seconds), entity architecture coherent enough for cross-reference verification, pricing and service descriptions machine-readable rather than buried in prose. This is what AAO — Assistive Agent Optimisation — describes. A new surface, new execution, the same goal: be the answer when someone (or something acting on someone’s behalf) looks.
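As one concrete sketch of what "machine-readable rather than buried in prose" means in practice, service and pricing details can be exposed as schema.org Service and Offer markup that an agent can parse in milliseconds. The service name, provider, price, and currency below are invented for illustration:

```python
import json

# Sketch of machine-readable service and pricing data as schema.org
# JSON-LD. An agent extracts these fields directly instead of
# inferring pricing from marketing prose. All values are placeholders.
service = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Managed File Transfer",  # hypothetical service name
    "provider": {"@type": "Organization", "name": "Example Ltd"},
    "offers": {
        "@type": "Offer",
        "price": "499.00",        # hypothetical monthly price
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

# Serialise for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(service, indent=2))
```

The design point is speed of extraction: a well-formed Offer answers an agent's pricing question in a single parse, inside even the tightest timeout window.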
What Actually Changed Each Time
Looking across the history, the pattern of what changes is consistent.
The retrieval mechanism. Directories were editorial. Search engines were algorithmic. Voice assistants were single-answer extraction. AI systems are synthesised multi-source generation. AI agents are autonomous evaluation and action. Each retrieval mechanism has different technical requirements — hence different execution practices. This is the real substance of each new acronym: a specification for a new retrieval mechanism.
The competition for the answer slot. Ten blue links gave ten businesses the answer slot per query. A featured snippet gave one. An AI Overview typically cites three to five sources. An AI agent may recommend one. As each new surface has emerged, the answer slot has narrowed — which is why the commercial stakes of getting it right have increased with each transition. Seer Interactive found that AI-referred traffic converts at 14.2% compared to 2.8% for traditional organic. The narrower the slot, the higher the intent of whoever comes through it.
The technical prerequisites. Each surface has introduced new technical requirements: structured data for rich results, schema markup for AI systems, entity consistency for knowledge graphs, server-side rendering for AI crawlers that do not execute JavaScript. These are genuine new skills. They are not new goals.
The vocabulary. SEO. Local SEO. Voice search optimisation. AEO. GEO. AIO. AAO. Each transition generates new vocabulary. The vocabulary is useful for communicating precisely about a specific execution context. It is not useful as a replacement for understanding the underlying discipline. Practitioners who build their understanding on vocabulary will always be one acronym behind.
The Fundamentals That Have Never Changed
The fundamentals have been identical across every surface transition.
Authority matters. Whether the signal is PageRank, domain authority, knowledge graph entity confidence, or LLM training data presence, every discovery system has always weighted sources it considers credible above sources it does not. The mechanism for establishing credibility has changed. The requirement for it has not.
Relevance matters. Whether it is keyword matching, semantic query understanding, or paragraph-level extractability, every discovery system has always tried to surface the most relevant answer to the specific question being asked. What counts as relevance has become more sophisticated. The requirement has been constant.
Technical accessibility matters. Whether it is crawlability, mobile-first compliance, JavaScript rendering, page speed for AI crawlers, or structured data completeness, every discovery system has always required content to be technically accessible before it can be visible. The specific requirements have changed. The principle has not.
Consistency across platforms matters. Whether it is NAP consistency for local SEO, entity consistency for knowledge graphs, or cross-platform information accuracy for AI agent verification, discovery systems have always cross-referenced claims. The sophistication of cross-referencing has increased. The requirement for consistency has been present since at least 2008.
The Algorithmic Trinity: Why the Fundamentals Are Now Three Surfaces Simultaneously
Here is where the current moment differs from previous surface transitions in an important structural way — and where understanding the continuity gives you a specific tactical advantage.
Every previous surface transition was largely sequential. You optimised for directories, then optimised for search engines, then added local SEO, then added voice. Each new surface had its own requirements, but you could address them one at a time. The new surface was additive.
The AI transition is not sequential. Every AI discovery system — Google AI Overviews, Perplexity, ChatGPT Search, Microsoft Copilot, AI agents — runs simultaneously on three components that Jason Barnard of Kalicube calls the Algorithmic Trinity: large language models, knowledge graphs, and traditional search. ChatGPT is LLM-heavy. Google weights its knowledge graph. Perplexity weights its own retrieval index. But all three components are present in every platform, all the time.
This means a strategy that addresses only one component produces platform-inconsistent results. Strong traditional SEO (the search component) without entity authority (the knowledge graph component) gives you Google organic rankings and poor AI citation. Strong content structure (the LLM component) without Bing indexing (the search component) gives you Google AI Overview citations and zero ChatGPT or Copilot presence. The surfaces are now layered, not sequential.
The AI Discovery Stack maps this in practical terms — five layers from entity understanding through to agentic action, each corresponding to a different failure mode and a different fix. The Algorithmic Trinity explains why you need to work across all five simultaneously rather than in sequence.
The Compounding Advantage of Understanding Continuity
Here is the practical implication that matters most for any business or practitioner reading this.
If you understand that visibility is a single discipline operating on multiple surfaces, you build a foundation that works across all of them. Entity architecture — clear Organisation and Person schema, cross-platform consistency, knowledge graph presence — serves every AI system because every AI system uses the knowledge graph component to evaluate credibility. Technical accessibility — fast pages, server-rendered content, correct crawl permissions — serves traditional search and every AI crawler. Content structure — standalone answer openings, explicit definitions, attributed statistics — serves AI selection on every platform because the selection mechanism is fundamentally similar across all of them.
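A minimal sketch of the entity-architecture piece described above: Organisation schema whose sameAs links tie the entity to the same profiles across platforms, which is what knowledge-graph cross-referencing keys on. The names and URLs below are placeholders, not real entities:

```python
import json

# Sketch of Organization entity markup (schema.org JSON-LD).
# Names and URLs are hypothetical placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://www.example.com",
    "founder": {"@type": "Person", "name": "Jane Doe"},
    # sameAs asserts that these external profiles describe the same
    # entity, giving cross-referencing systems a consistent identity
    # to verify across platforms.
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://en.wikipedia.org/wiki/Example",
    ],
}

# Serialise for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(org, indent=2))
```

The same names, URLs, and relationships should appear identically everywhere the entity is described; the markup only pays off when the claims it makes survive cross-referencing.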
Every piece of foundation work compounds across all current and future surfaces. When the next surface transition arrives — and it will arrive, probably faster than the last one — a business with strong entity architecture, a technically accessible site, and well-structured content will adapt by adding new execution on an existing foundation. A business that built for one surface specifically will start from scratch again.
This is the compounding advantage that SEO Strategy has always tried to build for clients. The Dog Walker Portsmouth site that has ranked number one since 2009 is not a curiosity. It is a proof of concept: a site built on correct fundamentals that has survived every algorithm update, every interface change, and every new surface transition because the foundation was right. Azure Outdoor Living, which scaled to seven-figure turnover through organic search, did so because the visibility systems we built compound — each year’s work reinforcing the previous year’s rather than replacing it.
The businesses now dismissing AI visibility as a fad and doing nothing are making a mistake. The businesses treating it as a completely new discipline — throwing out their existing SEO investment and rebuilding from scratch around AI-specific tools — are making an equally costly one. The right response is to understand what the new surfaces require in addition to what you already have, audit which layers of the AI Discovery Stack are failing for your specific site, and address them in sequence from the foundation up.
The Thread Forward: What 2030 Looks Like From Here
AI agents are already replacing significant portions of the research and comparison phases of the buying journey. A procurement manager asking an agent to evaluate managed file transfer vendors is not a 2030 scenario. It is happening now. The agent visits your site, reads your documentation, cross-references your claims, compares you against alternatives, and delivers a shortlist recommendation — often before a human ever sees your URL.
By 2030, this will be the dominant discovery mechanism for high-consideration B2B purchases. And by 2030, a significantly more capable AI — with access to richer knowledge graphs, better cross-reference verification, and longer context windows — will still need to retrieve certain things from external sources. It will still need specific, attributed facts that it cannot safely fabricate: case study outcomes tied to named clients, original frameworks with identified originators, practitioner insights that come from doing the work rather than synthesising training data. The sites that will perform best in that environment are the ones building those assets now.
Visibility has always been the goal. The landscape is just bigger now — and getting bigger faster than at any previous point in the discipline’s history. But the thread runs unbroken from the first directory submission to the last agentic recommendation. Understand the thread, and every new surface becomes an expansion of what you already know how to do.