Chapter 5: Life-Changing Diets – Jump Start Web Performance

The performance techniques described in this chapter are more radical and could be difficult to apply to an existing project. Fortunately, there are no such limitations when embarking on a new site or app, so we look deeper into CMS issues, JavaScript optimization, DOM handling, server-side rendering, static site generators (SSGs), and development processes.

Evaluate CMS Templates and Plugins

Content management systems such as WordPress don’t generate bloated, badly performing pages … until you start adding stuff!

Free or commercial templates make financial sense. Why employ a developer when an off-the-shelf solution does everything you need for a few dollars? Unfortunately, there’s a hidden cost. Generic templates must sell hundreds of copies—if not thousands—to recoup the development effort. To attract buyers, the developers bundle every conceivable feature. Your site may only use a fraction of those facilities, but they can still be present in the code, so the download weight and processing are affected.

Similarly, be wary about plugins, since their quality and effectiveness vary. The best plugins can improve performance by optimizing database tables, caching data, and removing redundant code. The worst will duplicate assets, make convoluted configuration changes (such as .htaccess files), add unnecessary bloat, and affect responsiveness even though they’re inactive on a particular page.

Always evaluate page cost and performance when considering new templates and plugins. Where possible, choose more lightweight options, even if the purchase price is higher.

Reduce Client-side Code

Blindingly obvious statement alert: smaller files result in faster pages.

Not all assets are created equal, though. 500KB of image data has a relatively low performance hit, since it’s downloaded once, cached in the browser, and positioned on the page. The same quantity of HTML, CSS, or JavaScript has a far bigger impact, because it must be downloaded, parsed, and processed.

Ideally, the number of HTML DOM nodes should be reduced to a minimum. A shallower tree depth means rendering and reflows are performed more effectively. Modern layout tools such as Flexbox and Grid allow you to remove wrapper elements that may have been necessary in float-based designs. Keep the document small and look out for signs of DIVitis!

Similarly, the fewer CSS rules you require, the quicker a document can be rendered. Look out for complex selectors, especially when using preprocessors such as Sass, which expand deeply nested rule sets. Check your compiled style sheet output to ensure it’s as efficient as is practical.

Try to embrace the CSS cascade rather than working against it! A little understanding can reduce code and improve performance. For example, you can set default fonts, colors, sizes, tables, grids, and form fields that are universally applied but can be tweaked for individual components.

Also be wary of using CSS resets, which means having to re-apply default styling to every element. CSS normalization, such as Normalize.css, could be a better alternative, since it makes browsers render more consistently. That said, default styling between browsers is closer than ever.

Optimize JavaScript Code

HTML is a robust technology; even the oldest browsers without HTML5 support will show content. Similarly, CSS can fail to download or have coding errors, but the page remains viewable. By contrast, JavaScript is fragile and computationally expensive. A single error, unsupported command, or long-running task can prevent further code from running.

It’s difficult to recommend JavaScript optimizations, since all applications will be different, but there are a few general tips that could improve performance. That said, be wary of micro-optimizations, which may shave a few milliseconds but aren’t called frequently enough to make a difference. Use your browser’s developer tools to check whether any gains have been achieved.
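As a rough sanity check, you can time a function yourself with performance.now(), which is available in browsers and modern Node.js. The timeIt helper below is an illustrative sketch, not a rigorous benchmark—for real measurements, rely on your browser’s performance profiler:

```javascript
// crude timer: run a function repeatedly and report the elapsed time
function timeIt(label, fn, iterations = 10000) {
  const start = performance.now();
  let result;
  for (let i = 0; i < iterations; i++) result = fn();
  const elapsed = performance.now() - start;
  console.log(`${label}: ${elapsed.toFixed(2)}ms for ${iterations} runs`);
  return result;
}

// compare candidate implementations of the same task
const sumReduce = () => [1, 2, 3, 4, 5].reduce((a, b) => a + b, 0);
timeIt('reduce', sumReduce);
```

If two approaches differ by fractions of a millisecond over thousands of iterations, the "optimization" is unlikely to be worth the effort.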

Use JavaScript Sparingly

If a browser can do something in HTML and/or CSS alone, that should be your preferred option. You can still apply progressive enhancements where necessary (discussed below).

Modern browsers have implemented many regularly used features that previously required scripting, such as form validation, field auto-complete, animations, video, expanding text, modal dialogs, and more. There will be challenges—ask anyone who’s ever tried styling a <select> drop-down—but using a native feature will always be faster and use less code.

Consider the choice between an HTML <button> and a <div> acting as a form submit button. The HTML code looks similar either way:

<button>submit</button>

Styling a <div> in CSS may be easier:

<div class="button">submit</div>

However, the HTML <button>:

  1. offers default styling to look like an OS button
  2. works on all browsers even when CSS or JavaScript fails
  3. works immediately, as the page loads and before JavaScript has started executing
  4. will automatically submit its parent <form> (if validity checks pass)
  5. can be operated with a mouse, touch screen, keyboard, or any other input device
  6. can receive focus, and accepts keypress shortcuts
  7. requires no ARIA roles or other accessibility assistance

A button that’s simulated in CSS and JavaScript requires significant effort, and it will never function as effectively as the native HTML alternative.

Avoid Long-running Tasks

Long-running tasks often trigger “unresponsive script” browser warnings, which prompt the user to halt JavaScript execution. Complex processing is best handled by a Web Worker, which allows a script to run in a background thread.

Web Worker scripts are limited. They can’t interact with the page DOM, and must communicate with the main script using a message API, but they’re able to perform Ajax requests and launch their own child workers.

Bind Events Sparingly

Applications can have dozens of event handlers. A handler function is registered for an event on a specific DOM element and runs whenever that event fires—such as running the doSomething() function when a click is detected on the myElement node:

myElement.addEventListener('click', doSomething);

Each bound event has a performance hit. Ideally, you should only add events you require, return from handler functions quickly, and unbind using removeEventListener when an event is no longer necessary.
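As a minimal sketch of binding and unbinding, the following uses a bare EventTarget—the same interface DOM elements implement, and available in modern Node.js as well as browsers. The event name and handler are illustrative:

```javascript
// bind a handler, then remove it once it's no longer required
const target = new EventTarget();
let count = 0;
function onPing() { count++; }

target.addEventListener('ping', onPing);
target.dispatchEvent(new Event('ping'));   // handled: count is now 1

target.removeEventListener('ping', onPing);
target.dispatchEvent(new Event('ping'));   // ignored: count stays at 1
```

Note that removeEventListener only works when you pass the same function reference used in addEventListener—anonymous inline handlers can’t be unbound this way.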

Also be wary of quick-firing events such as mousemove and scroll, which can trigger rapid and wasteful rerunning of handler functions. One way around this is to use throttling to ensure an event is called no more than once every N milliseconds. For example:

// throttle event to delay ms
function eventThrottle(element, event, callback, delay = 300) {

  let throttle;
  element.addEventListener(event, (e) => {

    throttle = throttle || setTimeout(() => {
      throttle = null;
      callback(e);
    }, delay);

  }, false);

}

// call windowScrollHandler no more than once every 300ms
eventThrottle(window, 'scroll', windowScrollHandler);

Alternatively, debouncing can be used to ensure a handler is only called after the event has stopped being triggered for N milliseconds:

// debounce event until it no longer occurs for delay ms
function eventDebounce(element, event, callback, delay = 300) {

  let debounce;
  element.addEventListener(event, (e) => {
    clearTimeout(debounce);
    debounce = setTimeout(() => callback(e), delay);
  }, false);

}

// call windowScrollHandler when at least 300ms has elapsed since the last event
eventDebounce(window, 'scroll', windowScrollHandler);

Finally, remember to make effective use of event delegation. For example, presume you have an HTML <table> with thousands of cells and want to react to a <td> being clicked. Attaching an event to each cell requires significant processing and would need to be reapplied if the table changed. Instead, you can attach a single event handler to the <table> element and examine the target. For example:

// handle a click on any <td> element
document.getElementById('mytable').addEventListener('click', (e) => {

  let t = e.target.closest('td');
  if (!t) return;

  console.log('clicked cell', t);

});

Analyze Modified Code

It’s rare to encounter code that hasn’t been modified before it reaches the browser!

  • Minifiers remove unnecessary characters and attempt optimizations such as rearranging statements or unrolling loops.
  • Transpilers such as Babel convert ES6 to ES5 so the code runs in older browsers.
  • Compilers such as TypeScript, CoffeeScript, and Flow convert alternative or superset syntaxes to JavaScript.
  • Projects such as Blazor convert C# to WebAssembly—a low-level, assembly-like language that offers near-native OS performance in JavaScript engines.

All offer stability and performance benefits, but check that the conversion is optimal and that it’s not unnecessarily importing several kilobytes of transpiler library code. For example, consider the following 32-byte ES6 for...of loop:

for (let p of n) console.log(p);

This results in 598 bytes of Babel-transpiled code. Each additional loop adds a similar quantity of code, and none will execute in IE11—which partly defeats the point of transpiling! Options to consider:

  • Use ES5 or more transpiler-efficient ES6 code to achieve the same result.
  • Use differential loading to serve ES6 module-based code to modern browsers and larger transpiled scripts to older browsers.
  • Drop support for browsers without ES6 support (primarily IE). Your site or application can remain usable if you adopt server-side rendering and progressive enhancement techniques.
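To illustrate the first option, the for...of loop above can be rewritten as an index-based ES5 loop that produces identical output with no transpiler helper code:

```javascript
// same result as `for (let p of n) console.log(p);`,
// but valid ES5 that runs everywhere without transpilation
var n = ['alpha', 'beta', 'gamma'];
for (var i = 0; i < n.length; i++) {
  console.log(n[i]);
}
```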

Modify the DOM Effectively

Some modern JavaScript frameworks implement a virtual DOM. As you change page elements, the virtual DOM works out what’s been altered and determines how and when to make modifications. Ultimately, it must still change the real DOM, and you can make similar optimizations to improve performance without the additional overhead of virtual DOM calculations.

Cache Regularly Used Nodes

Regularly used DOM nodes should be stored as JavaScript variables so they don’t need to be re-fetched. The DOM references are retained even when other tree nodes are modified:

const
  main    = document.getElementsByTagName('main')[0],
  heading = main.querySelector('h1'),
  tables  = main.getElementsByTagName('table');

Search from Any Node

Rather than searching the whole tree from document, many DOM methods allow you to start from any node. The example above searches for the first heading and tables within the <main> element.

querySelector() and querySelectorAll() can find elements using jQuery-like CSS selectors. They’re usually slower than getElementById(), getElementsByTagName() and getElementsByClassName(), although the speed difference is unlikely to affect most applications.

Running Benchmarks

Tools such as jsPerf.com provide a way to create code snippets and run benchmarks on any browser to prove the efficiency—or inefficiency—of alternative functions.

getElementsByTagName() and getElementsByClassName() also return live HTMLCollections, which update automatically as the DOM is modified—so that it’s not necessary to rerun the query.

Minimize Reflows

When an element is added, modified, or removed from a page, it can trigger a cascade of layout changes to surrounding elements. For example, increasing a width by 1px could result in a neighboring element wrapping to the next line, which pushes all subsequent content down the page. It’s therefore more efficient to make changes that can’t impact the layout. For example:

  • use opacity and/or transform to translate (move), scale, or rotate an element
  • limit the scope of the reflow by changing elements low in the DOM tree (those without deeply nested children)
  • update elements in their own position: absolute; or position: fixed; layer
  • modify hidden elements (display: none;), then show them after the change has been applied

Batch-update Styles

The following example could cause three reflows:

let myelement = document.getElementById('myelement');
myelement.style.width = '100px';
myelement.style.height = '200px';
myelement.style.margin = '10px';

Performance can be improved by appending a class:

let myelement = document.getElementById('myelement');
myelement.classList.add('newstyles');

This applies CSS properties in one reflow operation:

.newstyles {
  width: 100px;
  height: 200px;
  margin: 10px;
}

Batch-update Elements

Try to minimize the number of times you interact with the DOM. An empty DocumentFragment can be used to build elements in memory before applying those changes to the page. For example, you can create an unordered list with three items like so:

// create list
let
  frag = document.createDocumentFragment(),
  ul = frag.appendChild( document.createElement('ul') );

for (let i = 1; i <= 3; i++) {
  let li = ul.appendChild( document.createElement('li') );
  li.textContent = 'item ' + i;
}

// append list to the DOM
document.body.appendChild(frag);

The DOM is only modified on the last line.

Use requestAnimationFrame

The window.requestAnimationFrame() method calls a function just before the browser performs the next repaint—normally once every sixtieth of a second (approximately every 17ms, presuming no other render-blocking processes are occurring). It’s normally used for animating frames in HTML5 games, although running it before any DOM update will be beneficial. For example:

function updateDOM() {
  let p = document.createElement('p');
  p.textContent = 'new element';
  document.body.appendChild( p );
}

requestAnimationFrame( updateDOM );

Consider Progressive Rendering

Rather than using a single site-wide CSS file, progressive rendering is a technique that defines individual style sheets for separate components. Each is loaded immediately before the component is referenced in the HTML:

<head>

  <!-- core styles used across components -->
  <link rel='stylesheet' href='base.css' />

</head>
<body>

  <!-- header component -->
  <link rel='stylesheet' href='header.css' />
  <header>...</header>

  <!-- primary content -->
  <link rel='stylesheet' href='content.css' />
  <main>

    <!-- form styling -->
    <link rel='stylesheet' href='form.css' />
    <form>...</form>

  </main>

  <!-- footer component -->
  <link rel='stylesheet' href='footer.css' />
  <footer>...</footer>

</body>

Each <link> still blocks rendering, but for a shorter time, because the file is smaller. The page is usable sooner, since each component renders in sequence; the top of the page can be viewed while remaining content loads. A similar approach is often adopted by Web Components, which encapsulate CSS within the code.

The technique can be less practical in templates where the content dictates the layout (Flexbox and tables), since reflows are triggered more frequently as the page loads. Grid-based page layouts are generally more suitable.

There’s some variation in how browsers treat progressive rendering, but the worst-case scenario is that the browser blocks rendering until all discovered CSS files have loaded. That’s no worse than loading each in the <head>.

Progressive rendering could benefit large sites where individual pages are constructed from a varied selection of different components.

Use Server-side Rendering

Which process is quicker?

Process 1 (typically used by JavaScript frameworks):

  1. Request a URL.
  2. Respond with a (mostly) empty HTML file.
  3. Download and execute JavaScript.
  4. Use Ajax or similar techniques to fetch content according to the URL.
  5. Load the content into the page body.

Process 2 (old-school method):

  1. Request a URL.
  2. Respond with the full HTML.

Server-side rendering is always quicker for the initial page load.

Loading a second page can be faster in Process 1, since it’s able to start at step 4. Assets such as style sheets, JavaScript, and images may already be available and parsed. Unfortunately, a large proportion of visitors may only view a single page, and the payload is higher because a larger quantity of JavaScript is necessary.

This is a better-performing process:

  1. Request a URL.
  2. Load HTML directly from the server into the browser.
  3. Download and execute JavaScript. Some rehydration may be necessary to initiate components with HTML data.
  4. Use Ajax or similar techniques to fetch and populate content according to URL navigation changes.

This can be more difficult to manage, since not all JavaScript frameworks provide server-based rendering capabilities using Node.js, PHP, Ruby, Python, and so on.
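At its simplest, server-side rendering just means the server assembles the full HTML before responding. A minimal sketch—the renderPage function and its data shape are assumptions for illustration, not any framework’s API:

```javascript
// minimal server-side rendering sketch: inject content into an
// HTML template so the response is complete before any client JS runs
function renderPage({ title, body }) {
  return `<!DOCTYPE html>
<html>
<head><title>${title}</title></head>
<body><main>${body}</main></body>
</html>`;
}

const html = renderPage({ title: 'Home', body: '<p>Welcome!</p>' });
// `html` is the full page, ready to send as the HTTP response
```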

Do You Need a JavaScript or CSS Framework?

A CSS and/or JavaScript framework can provide a good development structure for teams working on larger sites or applications. However, most are general-purpose tools: they provide a range of features you may not need or may have to adapt. Optimizing performance is often difficult because the core code isn’t under your control.

While a framework is certainly useful for prototyping, always question whether it’s necessary for the final site or application. How much weight does it add? Will it improve performance? Can it be updated easily? What happens when it’s eventually abandoned?

Invest time in researching the choices. Without investigation, every application looks like a nail to developers who understand a specific hammer. You should certainly avoid using more than one framework—with the possible exception of server-side options, or compilers such as Svelte, which remove themselves from production code.

Even once you settle on a chosen framework, there may be modular or lightweight alternatives such as Preact instead of React, or bling.js instead of jQuery.

Ultimately, the most efficient and adaptable framework will be one written specifically for your application.

Use a Static Site Generator

Most people start web development by creating (static) HTML, CSS, and possibly JavaScript files. The resulting assets can be hosted anywhere and are fast because they don’t use server- or client-side processing.

The main downside is content management: adding a new page could involve changing hard-coded navigation menus on every page in the site. At this point, developers often turn to server-side languages or a database-driven CMS, both of which have their own set of challenges.

What if you could create a fast, static site but make cross-site changes programmatically when something is added or removed? That’s exactly what a static site generator (SSG) does. It takes content—typically defined in markdown files—and builds a set of static web pages. The build-time process can construct menus, import images, generate styles, and so on, and can be rerun when anything changes. The resulting site is decoupled from a server and is often referred to as using a JAMstack: JavaScript, APIs, and markup.
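The following toy sketch illustrates the idea. The buildSite function and its data shapes are invented for illustration—real SSGs read markdown files, apply templates, and write output to disk—but the principle is the same: rebuild every page, regenerating shared elements such as navigation whenever content changes:

```javascript
// toy SSG sketch (data shapes are assumed): regenerate every page,
// including a shared navigation menu, from a list of content objects
function buildSite(pages) {
  const nav = '<nav>' + pages
    .map(p => `<a href="${p.slug}.html">${p.title}</a>`)
    .join('') + '</nav>';

  return pages.map(p => ({
    file: `${p.slug}.html`,
    html: `<!DOCTYPE html><html><body>${nav}<main>${p.body}</main></body></html>`
  }));
}

const site = buildSite([
  { slug: 'index', title: 'Home', body: '<p>Welcome</p>' },
  { slug: 'about', title: 'About', body: '<p>Our story</p>' }
]);
// `site` holds two static files, each with an up-to-date menu
```

Adding a page and rerunning the build updates the navigation on every page—no server-side processing is needed at request time.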

Most SSGs build a set of folder-based HTML files with associated assets that can be uploaded to any web server capable of serving static content. The Ruby-based Jekyll was one of the first SSGs, but StaticGen.com lists dozens of alternatives for a range of languages. Options such as Gatsby also create React-based JavaScript applications rather than HTML files. (Whether or not that’s a benefit is another matter!)

A static site can offer the best site performance, since it’s rendered once, then delivered to all users as is. There are no server-side dependencies, reliability is improved, version control is easy, and security issues can be eradicated.

There are some downsides:

  • configuration and setup takes time and is more difficult than a CMS
  • SSGs are rarely suitable for non-technical editors
  • there’s no concept of user roles or permission rights
  • site consistency can be more difficult to enforce, as editors can add any client-side code
  • the rebuild process can be slow, especially on larger sites

SSGs are ideal for sites that change relatively infrequently, but many of the issues can be overcome by importing data from a headless CMS or automating the build process.

Use a Build System

Even the most conscientious developer can forget to minimize a CSS file, optimize an image, or remove debugging console statements. Whatever technology you use to create a site or app, a build process can automate mundane tasks to ensure there are no oversights. Additionally, they can run tests, verify code, and deploy to staging or live servers.

Creating a build process can take a day or two, but it should save time over the long term. Popular generic build tools include Gulp.js, Grunt.js, Broccoli.js, and Brunch, which allow you to define and run tasks manually or when files are changed.

Alternatively, you could opt for web-specific module bundlers such as webpack or Parcel, which understand HTML, CSS, and JavaScript so they can parse and build optimized code, through operations like these:

  • dead asset elimination
  • code splitting and dependency handling
  • ES6 to ES5 transpiling
  • minification
  • source map generation
  • cache-busting
  • live reloading
  • enforcing performance budgets (discussed below)

Module bundlers often promise zero configuration … although the reality may be somewhat different!

A few tips to get started:

  • Choose a build system and stick with it for a while.
  • Automate the most frustrating tasks first.
  • Try not to overcomplicate your build process. Spend an hour or two creating an initial setup, then evolve it over time.
  • Do as much during the build process as possible. For example, an HTML template could be partially constructed from known data and partials rather than parsing everything at render time.

Use Progressive Enhancement

Progressive enhancement is a development approach rather than a technology. Each site or app feature starts with a baseline minimum viable implementation—perhaps an HTML-only solution. Enhancements are then added progressively when they’re supported by the user’s device. Consider a simple search box:

  1. The base solution is an HTML <input type="search" /> field which, when a string is entered, triggers a new page load showing search results.
  2. HTML5 constraint validation can be applied to ensure searching only occurs when a minimum of three characters has been entered.
  3. CSS styles are applied, showing basic formatting such as fonts, colors, borders, etc.
  4. When the field has focus, CSS animations could enlarge the field, show a submit button, etc.
  5. JavaScript could show suggestions as the user types characters.
  6. JavaScript could show a simple list of search results without the user having to leave the current page.
  7. PWA service workers could be used to cache suggestions and search results for later use.

Where necessary, the code tests that a feature is supported before attempting the enhancement. For example, suggestions could be implemented when JavaScript is running, events are supported, and the HTML5 <datalist> element is available.
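A sketch of such a feature test—the supportsDatalist helper is illustrative:

```javascript
// only enhance when the required feature is actually available
function supportsDatalist() {
  return typeof document !== 'undefined' &&
    'options' in document.createElement('datalist');
}

if (supportsDatalist()) {
  // safe to wire up <datalist>-based suggestions here
}
```

Browsers that fail the test simply keep the baseline search box; nothing breaks.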

Adding Missing Features with Polyfills

It’s often possible to use a polyfill to add a missing feature to browsers without native support. This can range from additional prototypes, such as the String.padStart() method, through to full APIs, such as one to provide geolocation support using IP lookups.

Polyfill.io provides a custom set of polyfills. However, be wary about the performance cost of attempting to polyfill everything. It may be preferable to offer IE users a fast, rudimentary feature than a slow, fully polyfilled experience.
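For example, a simplified padStart polyfill might look like the sketch below. It only defines the method where it’s missing, so natively supported browsers pay no cost (the real specification handles more edge cases, such as an empty pad string):

```javascript
// polyfill sketch: add String.prototype.padStart only where it's missing
if (!String.prototype.padStart) {
  String.prototype.padStart = function (targetLength, padString = ' ') {
    let str = String(this);
    while (str.length < targetLength) {
      str = padString.slice(0, targetLength - str.length) + str;
    }
    return str;
  };
}

console.log('5'.padStart(3, '0')); // "005"
```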

In the search box example above, progressive enhancement offers the following benefits:

  • The search box is device agnostic and works in all browsers—old, current, and those released tomorrow.
  • Assuming the HTML loads, the search box is always operational. This includes the period before CSS and/or JavaScript is downloaded and parsed. In performance terms, the feature is responsive immediately.
  • The user gets the best possible experience their device can handle. Performance isn’t affected when an enhancement can’t be added.
  • The search box is fault-tolerant: any enhancement can work or fail without breaking the system. It doesn’t matter whether CSS and/or JavaScript are blocked, are slow to arrive, or fail to download.
  • It’s the responsible option, and doesn’t require more development effort in most situations.

The approach has no downsides. Progressive enhancement only breaks when:

  1. It isn’t considered from the start. It may be difficult to retrospectively enhance a feature that already requires a high base-level of CSS and JavaScript.
  2. You try to support all browsers equally. It’s futile to expect a decade-old version of IE to behave the same as a modern application. Progressive enhancement means you never need to worry about old browsers. Their users may not receive the best experience, but the feature remains usable.

Adopt a Performance Budget

A performance budget imposes a limit on related metrics. Typical options include:

  • quantity-based limits, such as the maximum number of fonts, images, scripts, etc.
  • time-based limits, such as the first meaningful paint or interactive times
  • rule-based limits, such as a minimum performance and accessibility score in Lighthouse audits

You should experiment and discuss options with stakeholders to establish baseline criteria, such as:

  • the total size of a page must not exceed 500KB
  • a single image must be no more than 150KB
  • the home page must deliver less than 100KB of JavaScript
  • all pages must be readable within five seconds on a mid-range mobile device operating on an average 3G connection

Ideally, these criteria can be added to your build process. Tools such as the Lighthouse module and file size plugins can report—and potentially block—any deviation from the budget. Exceeding the budget means you must either:

  1. optimize an existing feature/asset
  2. lazy load an existing feature/asset on demand
  3. remove an existing feature/asset
  4. reject the new feature/asset

The limitations can help teams prioritize features. Increasing the budget should always be tougher than implementing another solution! For example, a budget increase must be discussed, justified, and agreed to by a two-thirds majority at a monthly progress meeting!
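The reporting step can be sketched as a simple build-time function—the asset names, sizes, and limits below are hypothetical:

```javascript
// hypothetical build-step check: compare asset sizes (in KB)
// against budget limits and report any violations
function checkBudget(assets, limits) {
  return Object.entries(assets)
    .filter(([name, kb]) => kb > (limits[name] ?? Infinity))
    .map(([name, kb]) => `${name}: ${kb}KB exceeds ${limits[name]}KB limit`);
}

const violations = checkBudget(
  { 'index.html': 45, 'bundle.js': 130, 'hero.jpg': 120 },
  { 'bundle.js': 100, 'hero.jpg': 150 }
);
// violations: ['bundle.js: 130KB exceeds 100KB limit']
```

A build could fail (or at least warn) whenever the violations list is non-empty.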

Create a Style Guide

A style guide is a set of agreed brand, content, design, and coding standards for teams generally working on large codebases developed over a long period. A good style guide promotes consistency and illustrates how developers should approach solutions. Front-end components can be demonstrated with example code that shows styling, animation, functionality, and restrictions. The benefits include:

  • new team members can become productive quickly
  • components are reused: developers are less likely to introduce their own HTML, CSS, and JavaScript
  • it becomes easier to update, maintain, and improve component performance when the same code is used throughout
  • code can be tested and quality assurance becomes simpler
  • users receive a consistent UI experience

A style guide can be as rigid or as flexible as you require. It’s often best to develop it as a set of HTML pages that can demonstrate code and be updated quickly. Example documents are available from styleguides.io.

Simplify and Streamline

Performance problems often start because stakeholders equate more features with more customers. This is rarely the case; most people prefer simplicity. They’re not using your site/app on a daily basis and just want to get a task done quickly and easily.

Average page weight reached 2MB because developers let it happen. We’re under pressure to deliver more in a shorter time, but are we doing the job effectively when it results in a slow, clunky application no one wants to use? Few clients will understand the intricacies of web performance, so it’s our responsibility to use efficient coding practices and to highlight potential pitfalls in layman’s terms.

  1. Be wary of the performance cost of any added features.
  2. Use analytics to monitor and identify little-used features.
  3. Fully remove unnecessary features or replace them with sleeker, lightweight alternatives.

Look after the bytes and the megabytes will take care of themselves!

Learn to Love the Web

The Web evolved from a document publishing platform to an application delivery system that revolutionized the way we distribute and use software. Unfortunately, this has resulted in an alarming tendency to over-engineer solutions when simpler options could be more effective. Rather than choose a native HTML control, we import the latest JavaScript module. Instead of adding a few styles, we copy vast quantities of CSS from Stack Overflow and Bootstrap.

If there’s one piece of advice to take away from this book, it’s learn the basics. Somewhat contradictorily, HTML and CSS are either disregarded as too simplistic to warrant respect or considered impenetrable technologies that must be fixed using JavaScript. Yet they’re the fundamental building blocks of the Web:

  • HTML5 has around 120 elements. Half of those will rarely be used, but there’s usually a better alternative to <div> and <span>.
  • There are almost 400 CSS3 properties and more are being added. No one could name them all, but they’re modularized. The foundations can be learned quickly, but experimentation and experience is required to understand the concepts.

Learning HTML and CSS will make you a better web developer and advance your JavaScript skills. A little knowledge will considerably improve your application’s performance.