By Adam Hultman
Lighthouse scores are often the go-to metric for evaluating a website’s performance. While they provide a great starting point, truly optimizing your frontend goes beyond what Lighthouse can measure. Real-world performance involves understanding how browsers render content, optimizing bundle size, and ensuring that your app is as fast as possible for users on all devices and networks. In this post, we'll dig into techniques like bundle analysis, hydration optimization, critical CSS, and code-splitting to make your apps snappier and more responsive.
Lighthouse is a fantastic tool that gives you a quick overview of your web app’s performance. It scores your app on aspects like First Contentful Paint (FCP), Time to Interactive (TTI), and Cumulative Layout Shift (CLS). But hitting a perfect score doesn’t necessarily mean your app is truly optimized for all real-world scenarios.
Real users come with different devices, network conditions, and expectations. A site that scores 100 in Lighthouse might still feel slow on an older mobile device or when dealing with a flaky network connection. This is where going beyond Lighthouse becomes crucial.
One of the key factors that affect your app's load time is the bundle size. The larger your JavaScript bundle, the more time the browser takes to download, parse, and execute it. This is especially crucial for mobile users who may have slower network connections.
To analyze your bundle, install the Webpack Bundle Analyzer by running the following in your terminal:
npm install --save-dev webpack-bundle-analyzer
Next, add it to your Webpack configuration:
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin()
  ]
};
Once you run your build process, the Bundle Analyzer will provide a visual representation of your bundle size. This helps you identify large dependencies that might be slowing down your app.
Look for opportunities to reduce the size of your bundle by tree-shaking unused code, using dynamic imports for rarely used components, and replacing large libraries with smaller alternatives. For example, swapping out lodash for lodash-es can reduce the bundle size if you’re only using a few functions.
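As a rough sketch of that swap (assuming debounce is the only lodash function you need), a named import from lodash-es lets the bundler tree-shake everything else:

// Before: a default import pulls all of lodash into the bundle
// import _ from 'lodash';

// After: import only what you use from lodash-es so the bundler can tree-shake the rest
import { debounce } from 'lodash-es';

function handleResize() {
  console.log('resized to', window.innerWidth);
}

// Only debounce (and its internal helpers) ends up in the final bundle
window.addEventListener('resize', debounce(handleResize, 200));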
If you’re using Server-Side Rendering (SSR) with frameworks like Next.js, the hydration phase can significantly impact user experience. Hydration is the process where the server-rendered HTML becomes interactive in the browser, but if it takes too long, users might see content without being able to interact with it.
To optimize hydration:
import dynamic from 'next/dynamic';

const HeavyComponent = dynamic(() => import('../components/HeavyComponent'), {
  ssr: false,
});
This loads HeavyComponent only on the client side, reducing the initial load for SSR.

You can also mark non-critical scripts with the defer attribute so they download in parallel but only execute after the document has been parsed:

<script src="non-critical.js" defer></script>
By optimizing the hydration process, you can reduce the Time to Interactive (TTI) and ensure that users can start interacting with your app as soon as possible.
The Critical Rendering Path (CRP) is the sequence of steps that the browser takes to render content on the screen. To optimize this process, focus on reducing Render Blocking Resources—these are the CSS, JavaScript, and fonts that the browser must load before rendering content.
Here are a few ways to optimize the CRP:
Inline the critical CSS needed for above-the-fold content directly in the HTML, so the first render doesn’t have to wait for a stylesheet download:
<style>
  /* Inline only the critical CSS */
  body { margin: 0; padding: 0; }
  .header { background-color: #333; color: #fff; }
</style>
Use preload for stylesheets and async for JavaScript. This helps the browser prioritize resources that are critical for rendering:
<link rel="preload" href="/styles.css" as="style">
<script src="/important-script.js" async></script>
Use font-display: swap in your CSS to ensure that text is rendered using a fallback font until the custom font is loaded:
@font-face {
  font-family: 'MyFont';
  src: url('myfont.woff2') format('woff2');
  font-display: swap;
}
In one project, I faced a challenge where the app’s Time to Interactive (TTI) was over 8 seconds on slower devices. Here’s how I brought it down by 30%:
First, a large date library (moment.js) was included across multiple pages. Replacing it with date-fns and using dynamic imports for date formatting functions (sketched below) reduced the bundle size by over 100 KB.

Second, large images were lazy-loaded with the loading="lazy" attribute, which prevented them from blocking the initial page load:
<img src="large-image.jpg" loading="lazy" alt="Lazy Loaded Image">
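The date-library change looked roughly like this; the formatDate helper is illustrative rather than the actual project code, but it shows the pattern of fetching date-fns only when a date needs formatting:

// Illustrative sketch: load date-fns on demand instead of shipping it in the main bundle
export async function formatDate(date) {
  const { format } = await import('date-fns');
  return format(date, 'yyyy-MM-dd');
}

// Usage: the chunk containing date-fns is only fetched on the first call
formatDate(new Date()).then((label) => console.log(label));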
Code-splitting and lazy-loading are essential for large apps with many routes and components. They allow you to split your JavaScript bundle into smaller chunks that are loaded on demand.

React.lazy combined with Suspense is a straightforward way to load components lazily:
import React, { Suspense, lazy } from 'react';

const LazyComponent = lazy(() => import('./LazyComponent'));

function App() {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      <LazyComponent />
    </Suspense>
  );
}
This improves Time to Interactive (TTI) by only loading components when they’re needed, keeping the initial bundle small.
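The same idea extends naturally to routes. Here’s a rough sketch using React Router; the page components (./pages/Home, ./pages/Dashboard) are hypothetical placeholders for your own routes:

import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// Each route becomes its own chunk, fetched only when the user navigates to it
const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));

function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<div>Loading...</div>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/dashboard" element={<Dashboard />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}

export default App;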
True performance optimization is about understanding the nuances of how your app loads and renders, and going beyond the surface-level metrics provided by tools like Lighthouse. By analyzing your bundle size, optimizing the critical rendering path, and fine-tuning how your app hydrates and loads, you can create a faster, more responsive experience for users on all devices.
Remember, a high Lighthouse score is great, but real-world performance is what your users will actually experience. With the right techniques, you can ensure that your app is not just fast in the lab, but fast in the wild too.