webp/avif-images · lazy-loading · core-web-vitals

How I Optimized a Website From 40 to 95 PageSpeed Score

How I optimized a real production website for performance, accessibility, and SEO: architecture decisions, key challenges, and what I'd do differently.


Suhail Roushan

May 12, 2026 · 6 min read

I optimized a production website from a 40 to a 95 PageSpeed score by focusing on Core Web Vitals, image delivery, and critical rendering. The project involved a real client site built with a traditional LAMP stack, where poor performance was directly impacting user engagement and SEO rankings. My goal was to implement a set of targeted, high-impact fixes without a full platform migration.

Architecture Overview

The original architecture was a monolithic PHP application (WordPress) serving all assets from a single server. The main performance bottlenecks were render-blocking resources, unoptimized images, and no caching strategy beyond basic browser hints. The optimized architecture introduces a static asset pipeline and a CDN layer.

The flow now separates dynamic content from static delivery. PHP generates the HTML, but all CSS, JavaScript, fonts, and images are optimized, versioned, and served via a CDN. I used a combination of build-time optimizations (like image conversion) and runtime techniques (like lazy loading). The diagram below shows the critical path for a page load.

graph TD
    A[User Request] --> B[CDN Edge];
    B -- Cache Miss --> C[Origin Server/PHP];
    C --> D[HTML Response];
    B -- Cache Hit --> D;
    D --> E[Browser Parses HTML];
    E --> F[Load Critical CSS Inline];
    E --> G[Defer Non-Critical JS];
    E --> H[Load Lazy Images/WebP];
    F & G & H --> I[Page Interactive];

Key Technical Decisions

The first major decision was to move image optimization to a build process. Manually converting hundreds of images to WebP and AVIF formats was not feasible. I wrote a Node.js script that uses the sharp library to process an uploads directory, generate optimized versions, and create a manifest file for the PHP application to reference.

const sharp = require('sharp');
const fs = require('fs/promises');
const path = require('path');

async function optimizeImage(srcPath, destDir, filename) {
  const baseName = path.parse(filename).name;

  // Generate WebP
  await sharp(srcPath)
    .webp({ quality: 80 })
    .toFile(path.join(destDir, `${baseName}.webp`));

  // Generate AVIF (requires a sharp build with AVIF support)
  await sharp(srcPath)
    .avif({ quality: 70 })
    .toFile(path.join(destDir, `${baseName}.avif`));

  // Manifest entry mapping the original file to its optimized variants
  return {
    original: filename,
    webp: `${baseName}.webp`,
    avif: `${baseName}.avif`
  };
}

async function buildManifest(srcDir, destDir) {
  const entries = [];
  for (const file of await fs.readdir(srcDir)) {
    entries.push(await optimizeImage(path.join(srcDir, file), destDir, file));
  }
  // The PHP templates read this file to pick the right <source> URLs
  await fs.writeFile(
    path.join(destDir, 'manifest.json'),
    JSON.stringify(entries, null, 2)
  );
}

The second decision was to implement a granular lazy loading strategy. Native loading="lazy" is good, but for hero images or content likely to be in the initial viewport, it can hurt LCP. I used the Intersection Observer API for more control, only applying lazy loading to images below the fold.

const lazyImages = document.querySelectorAll('img[data-src]');

const imageObserver = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;
      img.removeAttribute('data-src');
      imageObserver.unobserve(img);
    }
  });
}, {
  rootMargin: '100px' // Start loading 100px before the image enters the viewport
});

lazyImages.forEach((img) => imageObserver.observe(img));
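The markup this observer expects puts a lightweight placeholder in src and the real file in data-src; explicit width and height attributes reserve layout space so the swap doesn't itself cause CLS. The filenames here are illustrative:

```html
<!-- Below-the-fold image: real source in data-src, swapped in by the observer -->
<img src="placeholder.svg" data-src="photo-800.webp"
     width="800" height="600" alt="Product photo">
```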

What Broke and How I Fixed It

The first breakage was with AVIF images in Safari. While Chrome handled the new <picture> source order perfectly, Safari (which didn't support AVIF at the time) would fail to load any image if the AVIF source was first. The site showed broken images for a significant portion of users. The fix was to implement proper feature detection and fallback order in the PHP template.

The original markup was:

<picture>
  <source srcset="image.avif" type="image/avif">
  <source srcset="image.webp" type="image/webp">
  <img src="image.jpg" alt="...">
</picture>

I fixed it by reordering the sources and adding JavaScript-based feature detection for broader compatibility: WebP became the leading <source>, JPEG stayed as the ultimate <img> fallback, and AVIF was only offered to browsers that passed the detection check.
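A sketch of the corrected markup (filenames illustrative): WebP leads, JPEG remains the plain <img> fallback, and the AVIF source is only added by script once support has been confirmed:

```html
<picture>
  <!-- An AVIF <source> is inserted here by JS only after a successful decode test -->
  <source srcset="image.webp" type="image/webp">
  <img src="image.jpg" alt="...">
</picture>
```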

The second issue was Cumulative Layout Shift (CLS) caused by fonts. The site used a custom web font that loaded after the system font, causing a visible text reflow. I fixed this by adding a font-display descriptor to the @font-face rules: swap for body text. More importantly, I used font-display: optional for the primary heading font, which tells the browser to use the custom font only if it's available in the initial render cycle and otherwise fall back to the system font permanently. This eliminated the layout shift entirely.
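The @font-face rules, roughly as applied (font names and paths are illustrative):

```css
/* Body text: swap in the web font when it arrives (a brief reflow is possible) */
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/body.woff2") format("woff2");
  font-display: swap;
}

/* Headings: use the web font only if it is ready for the first render;
   otherwise keep the system fallback permanently, so no layout shift */
@font-face {
  font-family: "HeadingFont";
  src: url("/fonts/heading.woff2") format("woff2");
  font-display: optional;
}
```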

How to Build Something Similar

Start with measurement. Run Lighthouse in Chrome DevTools on a representative page and note the three Core Web Vitals: LCP, INP (which replaced FID in 2024), and CLS. Your priority order should be: 1) optimize LCP (images, server response, render-blocking CSS), 2) eliminate CLS (size your images and ads, reserve space for fonts), 3) improve INP (reduce JavaScript execution time, break up long tasks).
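As a rough triage aid, the measurements can be classified against the "good" and "poor" boundaries Google publishes for each vital. The helper below is a hypothetical sketch (not from the original project), but the threshold values are the published ones:

```javascript
// Published "good" / "poor" boundaries for the Core Web Vitals.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  INP: { good: 200, poor: 500 },   // milliseconds
  CLS: { good: 0.1, poor: 0.25 },  // unitless layout-shift score
};

// Classify a lab or field measurement against those boundaries.
function rateVital(name, value) {
  const t = THRESHOLDS[name];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

rateVital('LCP', 1800); // → 'good'
```

Running every audit through a classifier like this makes it obvious which of the three fixes above to tackle first.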

Create a static asset pipeline. Even if you're on a monolithic platform, you can process images and bundle CSS/JS locally and upload the optimized files. Use the sharp CLI or library for images. For JavaScript, use a bundler like esbuild to minify and combine files, and load them with defer. Inline critical CSS needed for the above-the-fold content directly in the <head>.
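Wired together in the document head, those pieces look roughly like this (filenames and the inlined rules are placeholders):

```html
<head>
  <!-- Critical above-the-fold CSS, inlined at build time -->
  <style>/* ... generated critical rules ... */</style>
  <!-- Remaining CSS loaded without blocking render (print-media swap trick) -->
  <link rel="stylesheet" href="/css/style.abc123.css"
        media="print" onload="this.media='all'">
  <!-- Bundled, minified JS parsed after the document -->
  <script src="/js/app.def456.js" defer></script>
</head>
```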

Finally, implement a caching strategy. Set aggressive cache headers (Cache-Control: public, max-age=31536000, immutable) for your static assets and serve them from a CDN. For the PHP site, I used a plugin to generate these headers. The key is to version your asset filenames (e.g., style.abc123.css) so you can cache them forever.

Would I Build It the Same Way Again?

For a WordPress or similar CMS site, yes, this incremental approach is effective. A full rewrite in a modern framework like Next.js would yield a higher performance ceiling, but the time and cost are often unjustified for an existing business site. The 55-point gain here came from focused fixes, not new technology.

The one thing I would change is the order of implementation. I tackled images first, which improved LCP but introduced the AVIF compatibility issue. Next time, I'd start with the fundamentals: enabling compression on the server (like Brotli), deferring JavaScript, and inlining critical CSS. These provide a solid baseline and are less likely to cause browser-specific bugs. Then I'd layer on image optimization and advanced caching.

The single most important thing to know before starting a performance overhaul is that your metrics will vary wildly. A 95 on your local machine might be a 75 on a 3G connection from another country. Always test using Lighthouse's simulated throttling and real-world conditions via tools like PageSpeed Insights or WebPageTest. Performance is a feature, not a one-time score.


Written by Suhail Roushan — Full-stack developer. More posts on AI, Next.js, and building products at suhailroushan.com/blog.
