The Bigfoot Effect: What the ChatGPT Model Switch Means for AI Visibility in 2026

27,000 responses. 400 prompts. 14 weeks. On 4 March 2026, OpenAI switched ChatGPT’s default model from GPT-4o/5.2 to GPT-5.3 Instant. Visibility metrics in ChatGPT Search collapsed overnight. Resoneo, working with data from Meteoria, tracked what happened with precision. The result is the most detailed evidence yet of how model updates shift AI citation concentration — and it has a name: the Bigfoot Effect.

What the data shows

Before the switch (January through 3 March 2026): average unique domains per response was 19.1. After the switch (4 March onwards): 15.2. A 20.4% drop, sustained across 14 weeks, confirmed across 27,000 tracked responses.

The mechanistically important finding is what did not change: the URLs-per-domain ratio held stable at 1.26 throughout the entire period. ChatGPT did not start crawling pages more shallowly. It visited fewer distinct domains. Same depth per domain. Fewer domains invited.
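Both metrics can be computed directly from citation logs. A minimal sketch, assuming each tracked response is recorded as a list of cited URLs (the data here is hypothetical, not the Resoneo dataset):

```python
from urllib.parse import urlparse

def citation_metrics(responses):
    """Compute average unique domains per response and the overall
    URLs-per-domain ratio. Each response is a list of cited URLs."""
    domain_counts = []
    total_urls = 0
    total_domains = 0
    for urls in responses:
        domains = {urlparse(u).netloc for u in urls}  # distinct hosts cited
        domain_counts.append(len(domains))
        total_urls += len(urls)
        total_domains += len(domains)
    avg_domains = sum(domain_counts) / len(domain_counts)
    urls_per_domain = total_urls / total_domains
    return avg_domains, urls_per_domain

# Hypothetical example: two responses citing a handful of sources
responses = [
    ["https://a.com/x", "https://a.com/y", "https://b.com/z"],
    ["https://c.com/p", "https://d.com/q"],
]
avg, ratio = citation_metrics(responses)
print(round(avg, 2), round(ratio, 2))  # 2.0 domains/response, 1.25 URLs/domain
```

A stable URLs-per-domain ratio alongside a falling domain average is exactly the signature described above: the model cites the same number of pages per domain, from a smaller set of domains.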

This is the Bigfoot Effect: the same concentration dynamic Dr Pete at Moz identified in Google’s 2013 Bigfoot update, where dominant domains took up more space on page one while smaller sites were squeezed out. In ChatGPT Search, the same pattern is emerging. Fewer domains capture a larger share of each response’s citation surface.

GPT-5.4 goes further

GPT-5.4 launched the day after the default switch, on 5 March. Analysis by Lily Ray and Chris Long found that GPT-5.4 Thinking runs 10 or more fan-out queries per response and uses explicit site: operators targeting specific trusted domains — Clutch and G2 named explicitly in their analysis. The model is not merely preferring these platforms in a general sense. It is actively searching them by domain name as part of its retrieval process.

This is a structural change in how ChatGPT Search sources its answers. A business absent from Clutch or G2 is absent from that retrieval step entirely. Not ranked lower. Not considered and passed over. Simply not consulted.
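The retrieval pattern Ray and Long describe, multiple fan-out queries each scoped to a trusted domain with a site: operator, can be illustrated with a simple query builder. The domain list and query templates here are illustrative assumptions, not OpenAI's actual retrieval configuration:

```python
# Illustrative sketch only: TRUSTED_DOMAINS and the templates are
# assumptions for demonstration, not OpenAI's real configuration.
TRUSTED_DOMAINS = ["clutch.co", "g2.com"]

def fan_out_queries(topic, domains=TRUSTED_DOMAINS):
    """Expand one user topic into several domain-scoped searches."""
    templates = [
        "best {topic}",
        "{topic} reviews",
        "top {topic} companies",
    ]
    queries = []
    for domain in domains:
        for t in templates:
            # site: restricts each search to a single trusted domain
            queries.append(f"site:{domain} " + t.format(topic=topic))
    return queries

for q in fan_out_queries("healthcare IT consultancy"):
    print(q)
# e.g. site:clutch.co best healthcare IT consultancy
```

The point of the sketch is the exclusion property: a business with no presence on a listed domain produces zero results across every query in the fan-out, so it never enters the candidate set at all.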

The server log confirmation

Independent server log analysis by Jérôme Salomon at Oncrawl corroborated the Resoneo findings from a different angle. Tracking OAI-SearchBot (ChatGPT’s web crawler) across multiple websites, his data showed crawl volume stabilising at a lower level after the model switch. Some pages are no longer crawled at all. Crawl frequency has decreased for pages still being visited. OAI-SearchBot traffic has not compensated for the drop in default model web searches.

The root cause Salomon identifies: with 90% or more of ChatGPT’s weekly users on the Free tier, the default experience is GPT-5.3 Instant — which triggers fewer web searches per query, uses fewer grounding URLs, and produces fewer citations than previous default models or paid tier models. The Free tier majority is dragging down the aggregate citation surface available to businesses relying on ChatGPT Search for visibility.
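Site owners can run the same check Salomon describes against their own access logs. A minimal sketch that counts daily OAI-SearchBot hits, assuming a combined-format access log (the user-agent substring is the one OpenAI documents for its search crawler; the log lines below are hypothetical):

```python
import re
from collections import Counter

# Assumes combined log format: the date appears as [dd/Mon/yyyy:...]
LINE_RE = re.compile(r'\[(\d{2}/\w{3}/\d{4})')

def daily_bot_hits(log_lines, bot="OAI-SearchBot"):
    """Count hits per day for lines matching a crawler user-agent substring."""
    hits = Counter()
    for line in log_lines:
        if bot in line:
            m = LINE_RE.search(line)
            if m:
                hits[m.group(1)] += 1
    return hits

# Hypothetical log lines
logs = [
    '1.2.3.4 - - [03/Mar/2026:10:00:00 +0000] "GET /a HTTP/1.1" 200 123 "-" "OAI-SearchBot/1.0"',
    '1.2.3.4 - - [05/Mar/2026:10:00:00 +0000] "GET /b HTTP/1.1" 200 123 "-" "OAI-SearchBot/1.0"',
    '5.6.7.8 - - [05/Mar/2026:10:01:00 +0000] "GET /c HTTP/1.1" 200 123 "-" "Mozilla/5.0"',
]
print(daily_bot_hits(logs))
```

Plotting these daily counts across the switch date makes the before/after crawl-volume step visible for your own site, which is the angle from which the Oncrawl analysis corroborated the citation data.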

What this means for strategy

Three conclusions follow directly from the data:

Citation concentration is now structural, not a temporary glitch. The Bigfoot Effect is not a model bug or a temporary calibration. It reflects a deliberate architectural shift toward fewer, more trusted source domains. The domains already inside the citation set will compound their advantage. The domains outside will face a structurally higher entry threshold at each subsequent model update.

Clutch and G2 are not optional for B2B businesses. GPT-5.4’s explicit site: operators targeting these platforms make them functionally mandatory for any B2B business using ChatGPT Search as an AI visibility channel. A verified Clutch profile with reviewed client outcomes is no longer a nice-to-have entity signal. It is a direct prerequisite for appearing in the retrieval step the model runs for B2B category queries.

The Free tier majority problem requires a Perplexity hedge. Because 90%+ of ChatGPT users experience the reduced-citation Free tier model, relying exclusively on ChatGPT Search for AI visibility is a concentration risk. Perplexity, which retrieves from the live web on every query regardless of subscription tier, provides a more direct path to citation for businesses that cannot yet guarantee inclusion in GPT’s concentrated source set. This is not an argument against ChatGPT optimisation. It is an argument for a multi-platform strategy where Perplexity provides the fast feedback loop and ChatGPT provides the long-term compound advantage.

Loren Baker, Founder of Search Engine Journal, put the underlying mechanism clearly in April 2026: “Unlinked brand mentions are now doing work that used to require a backlink. When large language models train on data, they absorb patterns of association. If your brand shows up consistently alongside a topic, you become the answer before anyone asks the question.” The Bigfoot Effect makes this argument urgent: the window to build that associative presence before concentration hardens further is open now. The brands doing this in 2026 are doing what early SEOs did in 2003 — establishing authority before the channel got crowded.

For the full context on ChatGPT-specific optimisation, see How to Rank in ChatGPT. For the cross-platform comparison, see AI Search Platform Comparison. For the leaderboard monitoring approach that anticipates these model changes before they affect your metrics, see LLM Leaderboard: Reading Model Updates as Visibility Signals.

Related topics:

ai-seo ai-visibility chatgpt-seo future-of-seo llm-optimisation search-trends
Sean Mullins

Founder of SEO Strategy Ltd with 20+ years in SEO, web development and digital marketing. Specialising in healthcare IT, legal services and SaaS — from technical audits to AI-assisted development.