Before we talk about your site’s performance, here are the numbers for ours. Tested 21 February 2026 via Google PageSpeed Insights, seostrategy.co.uk scores: Desktop — Performance 99, Accessibility 93, Best Practices 100, SEO 100. Mobile — Performance 91, Accessibility 93, Best Practices 100, SEO 100. FCP 0.7s, LCP 0.7s, TBT 40ms, CLS 0.004 on desktop. That’s on a WordPress site running a custom theme, dark mode toggle, AI chat assistant plugin, mega menu with search, Complianz cookie consent, and programmatic schema compilation across seven entity types. No page builder. No GA4. No Tag Manager.
We don’t lead with those numbers to show off. We lead with them because this is a credibility test. If you’re hiring someone to improve your site’s performance, their own site should pass the same standards they’re asking you to meet. We’ve seen agencies selling “structured data integration” and “AI optimisation” whose own sites score below 50 on performance, below 85 on accessibility, and have a single valid schema item — breadcrumbs — despite advertising structured data as a service. That gap between what agencies sell and what they practise tells you everything about whether their advice is theoretical or tested.
Core Web Vitals are the measurable foundation of how your website performs for real users. They’re a confirmed Google ranking signal, a direct driver of conversion rates, and — increasingly — a factor in whether AI platforms can access and cite your content quickly enough to include it in generated answers. If your site is slow, unstable, or unresponsive, you’re paying for it in rankings, revenue, and AI visibility simultaneously.
What Are Core Web Vitals?
Core Web Vitals are three specific metrics Google uses to measure the real-world user experience of your pages. They replaced a broader, vaguer set of “page experience” signals with concrete, measurable thresholds that every site owner can test and track.
Largest Contentful Paint (LCP) measures loading performance — specifically, how long it takes for the largest visible content element (usually a hero image or main heading) to render on screen. The threshold: under 2.5 seconds is “good,” 2.5–4 seconds “needs improvement,” over 4 seconds is “poor.” Our desktop LCP is 0.7 seconds.
Interaction to Next Paint (INP) measures responsiveness — how quickly the page reacts when a user clicks, taps, or types. INP replaced First Input Delay (FID) as the official interactivity metric in March 2024 because FID only measured the first interaction, while INP captures the worst-case responsiveness across an entire page visit. The threshold: under 200 milliseconds is “good,” 200–500ms “needs improvement,” over 500ms is “poor.”
Cumulative Layout Shift (CLS) measures visual stability — how much the page layout moves around unexpectedly while loading. If you’ve ever tried to tap a button on a mobile page only to have it jump as an ad loads above it, that’s a CLS problem. The threshold: under 0.1 is “good,” 0.1–0.25 “needs improvement,” over 0.25 is “poor.” Our CLS is 0.004 — essentially zero layout shift.
All three metrics must pass at the 75th percentile of real user data for a page to earn a “good” Core Web Vitals assessment. That means 75% of your actual visitors need to experience good performance — not just a single test run on your office broadband.
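You can observe all three metrics for your own real sessions using Google’s open-source web-vitals JavaScript library. A minimal sketch (the CDN import path follows the library’s documented pattern; in production you’d beacon the values to an analytics endpoint rather than log them):

```html
<!-- Measure LCP, INP and CLS in the browser with Google's web-vitals library. -->
<script type="module">
  import { onLCP, onINP, onCLS } from 'https://unpkg.com/web-vitals@4?module';

  // Each callback fires with the metric's current value for this session.
  onLCP((metric) => console.log('LCP', metric.value)); // milliseconds
  onINP((metric) => console.log('INP', metric.value)); // milliseconds
  onCLS((metric) => console.log('CLS', metric.value)); // unitless score
</script>
```

Note that a single session’s values are one data point; Google’s assessment is built from the 75th percentile across all CrUX-eligible sessions.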
Lab Data vs Field Data: The Critical Distinction
This is the single most misunderstood aspect of Core Web Vitals, and it’s where most businesses get a false sense of security — or unnecessary panic.
Lab data comes from tools like Lighthouse, PageSpeed Insights (simulated mode), and WebPageTest. These tools test your page in controlled conditions — a specific device profile, a specific connection speed, a fresh browser cache. Lab data is excellent for diagnosis. It tells you what’s potentially slowing your site down and gives you specific, actionable recommendations. But it’s a simulation, not reality.
Field data comes from the Chrome User Experience Report (CrUX) — real performance data collected from actual Chrome users visiting your site over a rolling 28-day window. This is what Google uses for ranking. A site can score 95 in lab tests and fail in the field because real users on slow 4G connections, older Android devices, and congested networks have a fundamentally different experience than a simulated Lighthouse test on a fast machine.
The reverse is also true. A site can look mediocre in lab tests but pass in the field if most of its real visitors are on fast connections. This is why we use both: lab data to diagnose and fix specific issues, field data to verify that real users are actually experiencing the improvements. Google Search Console’s Core Web Vitals report shows field data, grouped by “Good,” “Needs Improvement,” and “Poor” URLs — that’s your authoritative source for ranking impact.
Our site currently shows “No Data” for field metrics in CrUX because it’s relatively new and hasn’t accumulated enough Chrome user sessions to generate a report. That’s normal for newer or lower-traffic sites. The lab scores demonstrate the engineering quality; field data will validate the real-world experience as traffic builds. If your site is established and shows field data, that’s the number that matters for rankings. If it doesn’t, lab data is your best proxy — and the scores we’ve achieved show what’s possible with disciplined engineering.
The Core Web Vitals to AI Visibility Pipeline
This is the strategic angle that makes CWV optimisation more commercially important in 2026 than it was even two years ago. Core Web Vitals don’t just affect your Google rankings — they feed a pipeline that determines whether AI platforms cite your content.
The data is clear: according to Ahrefs’ study of 1.9 million AI Overview citations, 76% of URLs cited in Google’s AI Overviews also rank in the top 10 organic results. The median ranking position of cited pages is 3. That’s not a coincidence — AI platforms rely heavily on search engine indexes through retrieval-augmented generation (RAG), and pages with strong organic rankings are disproportionately selected as citation sources.
Core Web Vitals are part of what keeps you in those top organic positions. Not the dominant factor — content relevance, authority, and topical depth matter more — but a confirmed ranking signal that provides measurable advantage in competitive queries. When two pages are equally relevant and authoritative, the one with better performance wins the position. And that position determines citation eligibility.
The pipeline works like this: strong CWV contributes to strong organic rankings, which drive citation eligibility in AI Overviews, ChatGPT, Perplexity and other AI platforms. It’s not three separate disciplines — it’s one continuous chain. A slow, unstable site doesn’t just lose organic positions; it becomes invisible in the AI answers that are increasingly where your potential clients discover solutions to their problems.
There’s also a direct technical connection. AI crawlers — GPTBot, ClaudeBot, PerplexityBot — have tight timeouts when retrieving content, typically one to five seconds. If your pages take too long to serve, these crawlers may abandon the request entirely. A fast site isn’t just better for Google — it’s accessible to the AI systems that need to read your content to cite it. Our site serves pages in under a second. That’s not just a vanity metric — it’s accessibility for every discovery channel that matters.
This connection between traditional SEO fundamentals and AI visibility is the same theme we explore on our SEO vs GEO comparison — roughly 70% of what drives GEO success is the same foundational work that drives SEO. Core Web Vitals sit squarely in that shared foundation.
The Usual Suspects: What’s Actually Slowing Your Site Down
After auditing dozens of client sites, the same culprits appear with depressing regularity. Most performance problems aren’t exotic or mysterious — they’re entirely predictable, and they’re entirely fixable.
Plugin bloat. WordPress sites running 30–60 active plugins are common, and most of them are loading CSS and JavaScript on every page regardless of whether that page uses the plugin. A contact form plugin loading its scripts on your blog posts. A slider plugin loading its library on pages without sliders. A social sharing plugin injecting its assets everywhere. Each adds 20–100KB of unnecessary payload, and the cumulative effect can easily add 500KB+ of unused code to every page load.
Page builders. Elementor, Divi, WPBakery and similar tools make website building accessible to non-developers, but the performance cost is substantial. A typical page builder adds 300–800KB of framework JavaScript and CSS that loads on every page, regardless of which features that page uses. The convenience comes at a measurable price in LCP and Total Blocking Time (TBT). We’ve seen page builder sites struggle to break 50 on mobile performance even after other optimisations, because the builder framework itself is the bottleneck.
Unoptimised images. Full-size PNG screenshots at 2MB each. JPEG hero banners without compression. No lazy loading, so images below the fold load before the user can even see them. No WebP or AVIF formats, which deliver the same visual quality at 25–50% of the file size. No explicit width and height attributes, which causes CLS as images pop into the layout after the text has already rendered. Images are usually the single biggest quick win in CWV optimisation — and the most commonly neglected.
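The fixes above translate into a small amount of markup discipline. A sketch of a well-behaved below-the-fold image (file names and dimensions are placeholders; the pattern is what matters):

```html
<!-- Explicit width/height reserve layout space before the image loads,
     which prevents CLS. loading="lazy" defers the download for images
     below the fold. The <picture> element serves AVIF or WebP to browsers
     that support them, with a JPEG fallback. -->
<picture>
  <source srcset="team-photo.avif" type="image/avif">
  <source srcset="team-photo.webp" type="image/webp">
  <img src="team-photo.jpg" width="1200" height="800"
       alt="Descriptive alt text" loading="lazy">
</picture>
```

Reserve `loading="lazy"` for images below the fold only; lazy-loading the LCP image makes things worse, not better.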
Database bloat. WordPress stores every autosave and revision by default. After a few years, it’s not unusual to find 10,000+ revisions in the database, alongside thousands of expired transients, orphaned metadata, and spam comments. A bloated database doesn’t directly affect front-end load times if you’re using caching, but it slows admin performance, increases backup sizes, and can cause problems when cache is cold or regenerating. Limiting revisions, cleaning transients, and optimising database tables are maintenance tasks that pay dividends.
Caching misconfiguration. Caching is supposed to solve performance problems, but misconfigured caching creates new ones. We see sites running two or three caching plugins simultaneously, each generating conflicting rules. Sites where object caching is enabled but the hosting environment doesn’t support it. Sites where page caching excludes logged-in users but the cookie consent plugin makes every visitor look “logged in” to the cache layer, effectively disabling caching for everyone. A properly configured single caching solution beats three conflicting ones every time.
Web fonts. Loading Google Fonts with six weights and two styles when the site only uses regular and bold. Each additional font weight is an extra network request and 20–40KB of data. Even worse: loading them from Google’s CDN adds a DNS lookup and connection to fonts.googleapis.com before the font file can even start downloading. Self-hosting your fonts with only the weights you actually use, using font-display: swap, and preloading the primary font file can shave hundreds of milliseconds off FCP and LCP.
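A sketch of the self-hosted approach, with only the two weights actually in use (font name and file paths are placeholders):

```css
/* Self-hosted fonts: one @font-face rule per weight actually used.
   font-display: swap shows fallback text immediately, then swaps in
   the web font once it arrives, avoiding invisible text. */
@font-face {
  font-family: 'SiteFont';
  src: url('/fonts/sitefont-regular.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
@font-face {
  font-family: 'SiteFont';
  src: url('/fonts/sitefont-bold.woff2') format('woff2');
  font-weight: 700;
  font-display: swap;
}
```

Pair this with a preload for the primary weight in the head — `<link rel="preload" href="/fonts/sitefont-regular.woff2" as="font" type="font/woff2" crossorigin>` — so the browser fetches it before the CSS is even parsed (the `crossorigin` attribute is required for font preloads).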
Cookie consent banners. Ironic, given they exist to protect user privacy: many cookie consent implementations load before the main content, adding to LCP and creating CLS as the banner appears and shifts the page layout. Complianz, the solution we use, handles this better than most — but even well-built consent tools need careful configuration to minimise performance impact.
Chat widgets. Third-party live chat scripts loaded synchronously in the header can block rendering entirely. Even when loaded asynchronously, they typically add 200–500KB of JavaScript. If you need live chat, load it on interaction (click a “Chat” button to initialise the widget) rather than on every page load for every visitor.
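The load-on-interaction pattern is straightforward. A sketch (the vendor script URL is a placeholder; substitute whatever your chat provider gives you):

```html
<!-- The third-party chat script is only fetched after the visitor clicks
     the button, keeping it off the critical path for everyone else. -->
<button id="chat-open">Chat with us</button>
<script>
  document.getElementById('chat-open').addEventListener('click', () => {
    const s = document.createElement('script');
    s.src = 'https://example-chat-vendor.com/widget.js'; // placeholder URL
    s.async = true;
    document.body.appendChild(s);
  }, { once: true }); // inject the script on the first click only
</script>
```

Most chat vendors expose an API call to open the widget once their script loads; wire that into the same click handler so the first click both loads and opens the chat.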
Google’s Own Products: The Ironic CWV Killers
This is the section no agency will write, because every agency runs GA4 on every client site. But it’s honest, so here it is.
Google Analytics 4 adds approximately 80–120KB of JavaScript to your site. Google Tag Manager adds its own container script plus whatever tags you load through it — often another 200–400KB when you include marketing pixels, conversion tracking, and remarketing tags. Together, GA4 and GTM are frequently the single largest JavaScript payload on client sites we audit, and they directly impact Total Blocking Time, which feeds into the performance score.
It’s genuinely absurd. Google defines the Core Web Vitals metrics. Google uses those metrics as a ranking signal. And Google’s own analytics products are among the most common causes of failing those metrics. The same company is setting the exam, grading the results, and selling you the textbook that weighs down your bag on the way in.
Google reCAPTCHA v3, which runs on every page load to build a behaviour score, adds another 150–200KB of JavaScript. Google Fonts loaded from Google’s CDN add external DNS lookups and connection overhead. Each of these is a Google product that measurably degrades performance against Google’s own benchmarks.
Our site deliberately doesn’t run GA4. That’s a real decision with real trade-offs — we lose granular user behaviour data, audience demographics, and the integration with Google Ads that most agencies rely on. Instead, we use Google Search Console for search performance data, server-side analytics for traffic patterns, and conversion tracking through form submissions. For our business, the trade-off is worth it. For clients with large ad spend and complex attribution requirements, GA4 may be necessary — but even then, it should be loaded asynchronously, deferred until after initial page render, and stripped of any tags that aren’t actively informing decisions.
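If GA4 does stay, deferral is cheap to implement. A sketch that delays the standard gtag.js loader until after the page has fully rendered (G-XXXXXXX is a placeholder measurement ID; this trades a few seconds of early-session data for a cleaner initial load):

```html
<!-- Defer GA4 until the window load event so it never competes with
     rendering. The gtag bootstrap below mirrors Google's standard
     snippet, just wrapped in a load listener. -->
<script>
  window.addEventListener('load', () => {
    const s = document.createElement('script');
    s.src = 'https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX';
    s.async = true;
    document.head.appendChild(s);

    window.dataLayer = window.dataLayer || [];
    function gtag() { dataLayer.push(arguments); }
    gtag('js', new Date());
    gtag('config', 'G-XXXXXXX');
  });
</script>
```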
The honest advice is this: audit every third-party script on your site and ask two questions. Is this script actively informing a business decision? And is it configured to load with minimal performance impact? If the answer to either question is no, remove it or fix it. That includes Google’s own products.
WordPress-Specific CWV Optimisation
WordPress powers roughly 40% of the web, and it’s perfectly capable of excellent Core Web Vitals scores — our site proves that. But achieving those scores on WordPress requires deliberate engineering decisions, not just installing a caching plugin and hoping for the best.
Theme selection matters enormously. A lightweight theme with clean code, minimal dependencies and no framework overhead starts at a fundamentally different baseline than a multipurpose theme designed to do everything for everyone. Our custom theme loads zero external frameworks. No jQuery (unless a plugin requires it), no Bootstrap, no Font Awesome icon library. Every line of CSS and JavaScript exists because a specific feature needs it. That’s the level of discipline required to score 99 on desktop with a full feature set including dark mode, mega menu, and an AI chat assistant.
Plugin audit methodology. Deactivate every plugin. Measure the baseline performance. Reactivate plugins one by one, measuring after each activation. You’ll quickly identify which plugins add meaningful overhead and which are negligible. The ones that add significant weight need to be evaluated: is this plugin essential? Can it be replaced with a lighter alternative? Can its assets be conditionally loaded only on pages where it’s actually used? This process typically identifies three to five plugins that account for 80% of the performance impact.
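Conditional loading, the third option above, is often a few lines of theme code. A sketch using WordPress’s dequeue functions (the handle names and page slug are placeholders; find a plugin’s real handles by inspecting the enqueued assets in your browser’s dev tools):

```php
<?php
// Dequeue a form plugin's assets everywhere except the page that uses it.
// Priority 100 runs this after the plugin's own enqueue (default is 10).
add_action( 'wp_enqueue_scripts', function () {
    if ( ! is_page( 'contact' ) ) {
        wp_dequeue_style( 'example-form-css' );   // placeholder handle
        wp_dequeue_script( 'example-form-js' );   // placeholder handle
    }
}, 100 );
```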
Image pipeline. Convert to WebP or AVIF formats. Implement lazy loading for below-the-fold images but ensure the LCP image is eagerly loaded and preloaded in the head. Set explicit width and height attributes on every image element to prevent CLS. Use responsive srcset attributes to serve appropriately sized images for different viewport widths. WordPress 6.x handles much of this natively with the performance improvements in recent core releases, but many themes and plugins override the defaults.
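For the LCP image specifically, the markup is the inverse of the lazy-loading pattern: eager, high priority, and discoverable early. A sketch (file name and dimensions are placeholders):

```html
<!-- In the <head>: preload the LCP image so the browser requests it
     before it has even parsed the body. -->
<link rel="preload" as="image" href="hero.webp" fetchpriority="high">

<!-- In the body: the LCP image itself, eagerly fetched at high priority,
     with explicit dimensions to prevent layout shift. -->
<img src="hero.webp" width="1600" height="900"
     alt="Hero banner" loading="eager" fetchpriority="high">
```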
Database maintenance. Limit post revisions (we use define('WP_POST_REVISIONS', 5); in wp-config.php). Schedule regular cleanup of transients, spam comments, and orphaned metadata. Optimise database tables monthly. For high-traffic sites, implement object caching with Redis or Memcached to reduce database queries per page load.
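The revision cap lives in wp-config.php alongside a couple of related constants worth knowing (the values here are examples, not prescriptions):

```php
<?php
// wp-config.php housekeeping constants (all are core WordPress settings).
define( 'WP_POST_REVISIONS', 5 );    // keep at most 5 revisions per post
define( 'AUTOSAVE_INTERVAL', 120 );  // autosave every 120s instead of 60s
define( 'EMPTY_TRASH_DAYS', 14 );    // purge trashed content after 14 days
```

If you have shell access, WP-CLI covers the recurring cleanup: `wp transient delete --expired` clears stale transients and `wp db optimize` optimises the tables, both scriptable from cron.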
Caching strategy. One caching solution, configured properly, is all you need. Page caching for anonymous visitors. Browser caching headers for static assets. If your host provides server-level caching (Cloudflare, WP Engine, Kinsta), use that rather than adding a plugin layer. The goal is to serve cached HTML in under 200ms Time to First Byte — and that’s achievable on most quality WordPress hosts without exotic configuration.
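Browser caching headers for static assets are a one-time server configuration. A sketch for nginx, assuming your asset filenames are versioned or fingerprinted so a year-long cache can never serve stale files:

```nginx
# Long-lived browser caching for static assets. Safe only when filenames
# change on every deploy (e.g. style.abc123.css), so "immutable" is true.
location ~* \.(css|js|woff2|webp|avif|jpg|png|svg)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```

Apache users can achieve the same with `mod_expires` directives in the vhost or .htaccess; the principle, long max-age plus versioned filenames, is identical.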
For a deeper dive into WordPress-specific optimisation beyond performance, including content architecture, plugin selection, and theme development, see our WordPress SEO guide.
Our Approach: Diagnose, Fix, Monitor
Core Web Vitals optimisation isn’t a one-off project — it’s a diagnostic process followed by targeted fixes and ongoing monitoring. Here’s how we approach it.
Diagnosis. We run a comprehensive performance audit using both lab tools (PageSpeed Insights, Lighthouse, WebPageTest) and field data (CrUX, Search Console experience report). We test your highest-traffic pages, your key conversion pages, and your homepage — because CWV is assessed per-URL, not site-wide. Each page gets a specific breakdown: what’s causing the LCP delay, what’s driving the INP score, what’s triggering layout shifts. No generic reports — specific issues with specific causes.
Prioritised fixes. Not all performance issues have equal impact. We prioritise by commercial value: fixing the contact page that converts at 3% matters more than fixing a blog post from 2019. We estimate the expected improvement for each fix — “removing this render-blocking script should reduce LCP by approximately 0.8 seconds on mobile” — so you can see the logic behind the priority order. Fixes range from quick wins (image compression, removing unused plugins) to structural changes (replacing a page builder section with clean HTML, migrating from Google Fonts to self-hosted fonts).
Pre/post measurement. We document performance scores before every change and after. This creates an evidence trail that proves what worked and by how much. It also catches regressions — sometimes a fix in one area causes an unexpected problem in another (installing a new caching plugin that conflicts with the CDN, for example). Measurement disciplines the process.
Integration with broader strategy. CWV optimisation doesn’t exist in isolation. It connects directly to conversion rate optimisation — every second of improvement measurably increases conversions. Research consistently shows that pages loading in one second convert at roughly three times the rate of pages loading in five seconds. It connects to technical SEO — crawl efficiency, rendering, server response times. And it connects to AI visibility — the speed and accessibility that help AI crawlers retrieve and parse your content for citation. A comprehensive search visibility audit assesses all of these dimensions together.
Ongoing monitoring. CWV scores can regress when plugins update, content changes, or new third-party scripts get added by a marketing team. We set up alerting through Search Console and recommend monthly performance reviews as part of any ongoing SEO engagement. The goal isn’t a one-time score — it’s sustained performance that compounds into ranking stability and conversion consistency.