There is a user agent string you can look for in your server logs right now: Google-Agent. Google introduced it to identify requests made by AI agents acting on behalf of users — browsing pages, evaluating content, taking actions such as submitting forms. It was reported by Search Engine Land on 25 March 2026. It is not a future development. It is deployed infrastructure.
This matters because the agentic evaluation argument — the idea that AI agents will visit your site on behalf of users and evaluate whether your business is a credible candidate before the user sees anything — has been the most frequently contested part of frameworks like CITATE and OARCAS. Not because the logic was wrong, but because it was speculative. The infrastructure was coming. Now it has a name.
What Google-Agent actually is
Google-Agent is a user agent that identifies when Google’s AI systems are acting agentically — not crawling for index purposes (that is Googlebot) but operating as an agent on a user’s behalf. Project Mariner, Google’s experimental browsing agent, uses it. As agentic features expand across Google’s products, this is the user agent those systems identify themselves with when they visit your pages.
The practical consequence is that agentic visits are now observable events. When an AI agent accesses your site to evaluate whether you are a credible vendor, read your service pages, extract your pricing or contact details, or assess whether your content is trustworthy enough to recommend, it identifies itself. If you see Google-Agent in your server logs, an AI agent has visited your site on behalf of a user.
What it means for page structure
A human reader navigates a page. They scroll, skim, follow links, return to headings. An AI agent evaluates a page differently. It extracts from the beginning of sections. It looks for explicit definitions, named entities, standalone answers that make sense without surrounding context. It checks whether the page declares who produced the content and whether there are attributable claims that can be used in a recommendation.
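You can approximate part of that machine evaluation yourself. The sketch below is an illustrative self-audit heuristic, not any published evaluation criteria: using Python's standard-library HTML parser, it checks a page for a few of the signals described above — declared authorship, headings, and a definition-style opening paragraph. The sample HTML and the specific checks are assumptions for demonstration.

```python
# Rough self-audit sketch (illustrative heuristics only): does a page
# declare an author, use headings, and open with a standalone answer?
from html.parser import HTMLParser

class StructureAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.headings = 0            # count of h1-h3 section markers
        self.has_author_meta = False # explicit authorship declaration
        self.first_paragraph = ""    # opening text an agent would extract
        self._in_p = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("h1", "h2", "h3"):
            self.headings += 1
        if tag == "meta" and a.get("name") == "author":
            self.has_author_meta = True
        if tag == "p" and not self.first_paragraph:
            self._in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.first_paragraph += data

# Hypothetical sample page for demonstration.
html = """<html><head><meta name="author" content="Jane Doe"></head>
<body><h1>What is X?</h1><p>X is a protocol for moving data.</p></body></html>"""

audit = StructureAudit()
audit.feed(html)
print(audit.headings, audit.has_author_meta)   # 1 True
print(audit.first_paragraph.strip())           # X is a protocol for moving data.
```

A page whose first paragraph only makes sense with surrounding context, or that carries no authorship signal at all, fails even this crude check — which is the point CITATE formalises.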
This is not a new observation — it is the foundation of CITATE. What Google-Agent confirms is that this evaluation is happening now, at scale, by identifiable infrastructure. A page written only for human readers — with context-dependent openings, unnamed entities, unsourced statistics, and vague positioning language — is not ready for this layer of evaluation. The structural requirements that CITATE defines are the structural requirements that make a page machine-evaluable.
What it means for the 20% argument
The SEJ/DAC framework published this week makes the case that AI citation optimisation is the right lever for approximately 20% of brands — those already winning in traditional search, with clean technical foundations and established authority. For the 80%, foundation work comes first.
Google-Agent changes the urgency calculation for that 20%. If your pages are already ranking, your content is already being visited by AI agents evaluating it on behalf of users. The question is not whether this evaluation is happening — it is whether your pages pass it. A page that ranks at position 3 but fails CITATE structure criteria is being retrieved, evaluated, and likely passed over. The programmatic alternative — 59,000 daily impressions at average position 8 and 2.2% CTR — demonstrates exactly what Stage 1 optimisation produces: retrieval pool membership, not citation. Google-Agent is evaluating for the second outcome, not the first.
The robots.txt check you need to do right now
Before anything else: check that Google-Agent is not being blocked in your robots.txt. If your robots.txt uses a catch-all disallow rule for unknown user agents, or if a security-conscious setup blocks any bot it does not explicitly recognise, Google-Agent will be blocked and the agentic evaluation layer cannot reach your pages at all. This is a Layer 1 failure in the AI Discovery Stack — the agent cannot get to the door, let alone evaluate what is behind it.
The user agent string is Google-Agent. To explicitly allow it, add the following to your robots.txt:
User-agent: Google-Agent
Allow: /
If your robots.txt already has User-agent: * followed by Allow: / with no specific disallow rules, you are likely fine — Google-Agent will be covered by the wildcard. The risk is in setups that allow only named crawlers explicitly, or that use blanket disallow rules for unrecognised agents. Check the file directly. This takes two minutes and the consequence of getting it wrong is that you are invisible to the agentic evaluation layer regardless of how well your pages are structured.
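You can test the outcome directly rather than reading the rules by eye. The sketch below uses Python's standard-library robots.txt parser against a made-up set of rules illustrating the risky pattern described above (named crawlers allowed, everything else disallowed); to check for real, parse the contents of your own /robots.txt instead.

```python
# Minimal sketch: would this robots.txt let Google-Agent in?
# The rules below are an invented example of the risky "allow only
# named crawlers" pattern -- substitute your own file's contents.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: SomeNamedCrawler
Allow: /

User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The catch-all Disallow applies to any agent not named -- including
# Google-Agent -- so the agentic layer is locked out.
print(rp.can_fetch("Google-Agent", "https://example.com/"))      # False
print(rp.can_fetch("SomeNamedCrawler", "https://example.com/"))  # True
```

If the first check prints False against your real file, the fix is the explicit Allow block shown above.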
What to check in your server logs
If you run Apache or Nginx, filter your access logs for the string “Google-Agent”. In Nginx: grep "Google-Agent" /var/log/nginx/access.log. In Apache: grep "Google-Agent" /var/log/apache2/access.log. In Cloudflare or similar CDN dashboards, filter bot traffic by user agent string. What you are looking for is not volume — a single visit from Google-Agent is the relevant signal. It confirms the evaluation layer is active on your domain.
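If you want more than a raw grep — which pages the agent actually visited — a few lines of Python will summarise hits by path. This sketch assumes the common/combined log format, where the user agent is the final double-quoted field; the sample log lines are invented for illustration.

```python
# Sketch: count which paths Google-Agent visited in an access log.
# Assumes common/combined log format; sample lines are invented.
import re
from collections import Counter

sample_log = [
    '1.2.3.4 - - [25/Mar/2026:10:00:00 +0000] "GET /services HTTP/1.1" 200 5123 "-" "Mozilla/5.0 (compatible; Google-Agent)"',
    '1.2.3.4 - - [25/Mar/2026:10:00:02 +0000] "GET /pricing HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (compatible; Google-Agent)"',
    '5.6.7.8 - - [25/Mar/2026:10:01:00 +0000] "GET / HTTP/1.1" 200 9000 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]

# Request line, status, size, referrer, then the user-agent field.
line_re = re.compile(
    r'"(?P<method>\w+) (?P<path>\S+) [^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

paths = Counter()
for line in sample_log:  # for a real file: for line in open(logfile)
    m = line_re.search(line)
    if m and "Google-Agent" in m.group("ua"):
        paths[m.group("path")] += 1

print(paths.most_common())  # [('/services', 1), ('/pricing', 1)]
```

Even one non-empty result is the signal that matters: the evaluation layer has reached your domain, and the paths tell you which pages it judged worth reading.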
The infrastructure is accelerating
Google-Agent arrived alongside TurboQuant — a vector indexing breakthrough announced by Google in March 2026 that reduces the time required to build a vector database index to virtually zero. The original research paper was published in April 2025, giving Google a year to integrate it before the March 2026 core update. Analyst Marie Haynes has identified TurboQuant as a likely factor behind that update — if correct, Google can now run semantic matching across hundreds of results rather than the top 20–30 it could previously afford to process.
The consequence for agentic evaluation is direct. At that scale, traditional ranking signals matter less. Semantic authority, entity recognition, and the machine-evaluable structure that Google-Agent is specifically designed to assess — these become more important. Google-Agent and TurboQuant are not separate stories. They are two parts of the same shift: AI-first evaluation infrastructure that rewards entities with clear identity, extractable content, and independent corroboration. The floor-by-floor model this implies is documented at MCP vs WebMCP: And Why Neither Matters If Your Building Has No Floors.
For a full diagnostic of how your pages perform against the evaluation criteria AI agents use, the AI Recommendation Readiness Diagnostic covers all five layers. For the page-level standard that determines whether content passes or fails machine evaluation, see the CITATE framework. For the agentic SEO context this sits within, see the Agentic SEO guide.