How To Make BigCommerce Fast

I’ve spent the last few months researching and testing how to speed up a BigCommerce store. It’s still a work in progress and contains some hacks, but it works. I’ll use PageSpeed Insights to show the results of my efforts on both mobile and desktop devices.

At the moment, only the category pages have been fully optimised. There’s quite a bit of work to make it robust and safe to use. Think of it as a prototype of what can be done. You can see all my changes on my fork of Cornerstone in the speed2 branch.

The Life Of A Page Load

When a user clicks on a web page link or enters it into the address bar, the browser goes through a sequence of tasks and milestones to eventually display the page to the user. Here I’ll talk about the most significant milestones and ways to speed each one up to result in the page loading faster.

To scare you from the start, here’s a W3C graph showing the timing attributes as a user navigates to a page. I’ll be talking about many of these steps with ideas for speeding them up.

W3C Navigation Timings

BigCommerce Cornerstone 6.3.0 Performance

All my modifications are based on Cornerstone 6.3.0. As a baseline, here are some PageSpeed Insights results for a product page using the original theme.

Not the worst scores in the world, but both devices are failing Largest Contentful Paint (LCP), one of the metrics Google uses to decide whether a page provides a good experience and deserves a ranking boost.

And scores typically go downward as widgets and features are added to pages.

The main issues reported are primarily beyond our level of control (but I had a go):

  • Reduce unused JavaScript (several js files added by the theme)
  • Reduce unused CSS (mainly the theme’s bundled CSS file)
  • Eliminate render-blocking resources (CSS files and web fonts)
  • Reduce initial server response time (that’s BigCommerce’s servers)

These lab tests can vary by a lot, making it quite hard to be conclusive about any results. Over time I should have more real-world data to confirm if these changes improved the page experience for most users.

Let’s see how we can improve things.

Making Connections

Before a browser can talk to a new domain/hostname, it must make a few requests to set up a connection: first a DNS request to find out what server the domain is hosted on (its IP address), then a TCP/IP connection to that server on the correct port number, and finally a handshake to make that connection secure.

This has to be done for every new domain used during the loading of a page, and it can add hundreds of milliseconds of delay before the first resource loads from each newly discovered domain.

You can see the time used up by connections in the previous WebPageTest waterfall diagram as the green, orange and purple bars before some requests.

For a BigCommerce page, WebPageTest indicated it took about 175ms to make the initial connection to the page’s domain.

These connections have to be made to access a domain, and nothing can be done about the delay when loading the initial HTML for a page. But it is possible to set up further connections in advance of a domain being needed, using resource hints at the top of the HTML. Here’s what BigCommerce does by default.

<link rel="dns-prefetch preconnect" href="https://cdn11.bigcommerce.com/s-65027" crossorigin>
<link rel="dns-prefetch preconnect" href="https://fonts.googleapis.com" crossorigin>
<link rel="dns-prefetch preconnect" href="https://fonts.gstatic.com" crossorigin>

There are a few mistakes here.

  1. For compatibility with Safari, the dns-prefetch should not be done in the same tag as the preconnect.
  2. I don’t believe the path in the first hint has any meaning; these hints expect domain-level origins.
  3. There is no value in doing this for instantly requested resources (the first two).

The best practice is to put the preconnect and the dns-prefetch in separate link tags, in that order; the dns-prefetch acts as a fallback for older browsers. In some cases, you will also need to add crossorigin. Test each variation, as the hints only help when done correctly.

Here is what I think BigCommerce should do:

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin="anonymous">
<link rel="dns-prefetch" href="https://fonts.gstatic.com" crossorigin="anonymous">

Pre-connection is of most value when the browser does not instantly know that a request to a domain will be needed. The browser can set up the connection in advance so that the request goes out faster when needed.

These are also browser hints, not directives. This means the browser will decide if and when it will do the preconnect.

Lighthouse recommends only having a few preconnect tags, and this preconnect article also warns about overuse.

WebPageTest is good at highlighting time used up by making connections. Here, the connection to fonts.gstatic.com was made in advance, so fonts were requested that bit earlier. But for the other pre-connects BigCommerce makes, it made no difference.

Our Tag Rocket app can add a lot of delayed requests to different domains. As these requests are made late in the process, I decided to test preconnecting to all of them, adding the hints at the end of the head section so that the critical resources stay first in the queue and these low-priority connections come last.

You can see how some connections were made early, but I did not see any noticeable change in the total page load timings. This may be because WebPageTest emulates a slower device where the network and CPU are busy, leaving little spare time for pre-connecting:

I suspect this would have more benefits on faster devices and connections.

Consider doing the preconnect/dns-prefetch for any resources on different domains that are requested later on, especially if they are related to how the page looks.
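
For a late-requested third-party resource, the hint pair described above would look something like this (the domain here is hypothetical, purely for illustration):

```html
<!-- preconnect first, then dns-prefetch as a fallback for older browsers -->
<link rel="preconnect" href="https://widgets.example.com" crossorigin>
<link rel="dns-prefetch" href="https://widgets.example.com">
```

Drop the crossorigin attribute if the resource is fetched without CORS; as the article notes, test both ways, as the hint only helps when it matches how the resource is actually requested.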

Time To First Byte (TTFB)

When a user wants to see a page, the first thing the browser does (after making the connection) is request the HTML for that page. The HTML is the scaffolding required to know how to build the page, and it contains the basic layout and references to the resources (images, scripts, style sheets) needed to complete it. Once the browser has made the HTML request, it can’t do anything until the HTML is returned, as it knows nothing about the page. The time it takes from the request to the first part of the response is TTFB.

Browsers are smart. As they start receiving the HTML, they can begin processing it; they don’t need the whole document to get the ball rolling. For example, URL-based resources referenced near the top of the HTML can be requested well before the whole of the HTML is received. So TTFB is the point where things start getting done.

Unfortunately, many CMS systems tend to have a poor TTFB. Most of the time is usually spent by the website’s server building the HTML. Modern CMS platforms run a fairly complex set of code to create a page, and that complexity means they have to build all the HTML before they can start sending any of it back to the browser, making TTFB late and not much before the response has been fully sent.

For BigCommerce, TTFB with WebPageTest is over 500ms. Then the content downloads almost instantly. Add the connection time, and we see it is often over a second before any HTML is sent to the browser. One second lost where nothing is done.

My point is that a smaller TTFB would speed things up by letting the browser process the page earlier, especially if it can start requesting resources sooner.

And this is where section.io comes in. Their BigCommerce TTFB solution can massively reduce the TTFB by almost instantly returning a cached version of the top part of the HTML from their CDN. A point in the HTML is chosen as a marker for where the caching should end. Anything above it is returned very quickly to the browser, while the rest is returned at the usual speed. This means the browser can process what is in that cached part very early on.

To take advantage of this, you want the required URL resources (any CSS, font, and script files) referenced in that top section so the browser starts loading them early. That’s what I did. Chrome’s Network report with no throttling is a great way to show the effect:

It has loaded all the CSS and JS files well before the HTML has finished downloading. The same view on a store without section.io shows that these critical resources don’t even start to load until after the HTML is complete.

I have a more detailed article on how to best set up section.io for BigCommerce.

If it is not possible to move the resource into the cached area, you can add preloads.

Preloading Resources

The preload tag tells the browser to load a resource now in anticipation that it will be wanted later. Its main value is when the browser would not otherwise know, at the time of the preload, that the resource will be needed. The resource is then available earlier, and the page loads quicker. This is especially valuable if you have the above TTFB solution, where you can preload resources in the cached section and get them loading very early on.

Here are some of the reasons to add preloads:

  • Preload the main image for a page to reduce LCP
  • Preload web fonts to avoid flicker when they are shown
  • Preload important scripts placed in the footer (especially if they are blocking)
  • Preload scripts that affect the display
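
In tag form, those preloads look something like this (the file paths are hypothetical):

```html
<!-- main page image, so it is in flight before layout discovers it -->
<link rel="preload" href="/images/hero-product.jpg" as="image">
<!-- web font: note that font preloads require crossorigin, even same-origin -->
<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>
<!-- footer script that would otherwise be discovered late -->
<link rel="preload" href="/js/footer-bundle.js" as="script">
```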

I also fixed an issue where important images were being lazy-loaded. Read more about that in my BigCommerce LCP quick fixes article.

The result was a far earlier start to loading the essential resources. CSS, fonts, and the main image start loading from the TTFB moment, which reduced LCP (green dotted line) by over 600ms. In fact, LCP now happens almost immediately after the HTML completes; the only remaining delay is the CPU putting the page together.

Fetch Priority

Chrome has recently released a new attribute called fetchpriority. It can be applied to link, img, iframe, and script tags to tell the browser to load the resource with a high or low priority.

Adding it to an img tag has a similar effect to doing a preload.

Async and defer scripts are low priority by default. fetchpriority can be used to get them to load more quickly.

link preload tags for images have a low priority, so they need to be placed high up in the head if they are going to load promptly. fetchpriority can be used to make them high priority, making their location less critical.
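
Putting those uses together (file names here are illustrative):

```html
<!-- prioritise the hero image, whether referenced directly or preloaded -->
<img src="/images/hero.jpg" fetchpriority="high" alt="Hero product">
<link rel="preload" href="/images/hero.jpg" as="image" fetchpriority="high">
<!-- push a non-visual async script further down the queue -->
<script src="/js/chat-widget.js" async fetchpriority="low"></script>
```

Browsers that don’t support the attribute simply ignore it, so it is safe to add.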

Stop Render Blocking

A script tag without async or defer is a render-blocking resource. BigCommerce typically includes a few in the footer:

<script src="https://cdn11.bigcommerce.com/***/theme-bundle.main.js"> </script>
<script type="text/javascript" src="https://cdn11.bigcommerce.com/shared/js/csrf-protection-header-***.js"></script>

A render-blocking script means it has to be loaded and executed before the browser can continue to put together the rest of the HTML for the page. Render blocking can cause significant delays in processing subsequent requests, which means it can hurt the most when in the head section.

I managed to remove all render-blocking scripts.

  • Preloaded and async-deferred the fonts script
  • Inlined the fonts CSS
  • Inlined critical CSS and used a trick to make the main CSS preloaded and non-blocking
  • Preloaded the main script and added it via JavaScript, so it did not block
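
The non-blocking CSS trick mentioned above is usually done with the well-known preload pattern; a sketch (the file path is illustrative, and the exact theme change may differ):

```html
<!-- Loads without blocking rendering, then applies as a stylesheet once fetched -->
<link rel="preload" href="/assets/css/theme.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<!-- Fallback for users with JavaScript disabled -->
<noscript><link rel="stylesheet" href="/assets/css/theme.css"></noscript>
```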

The inlined critical CSS is currently only applied to the category pages. It also introduced a new problem: rendering happened so early that on slow connections the fonts had not loaded yet, causing an annoying flicker. To solve that, I hide the page until the primary fonts have loaded. I need to work on a more robust way to do this, as a failure could mean the page never shows!
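
One way to make that hide-until-fonts-load approach safer is a timeout fallback. A sketch using the CSS Font Loading API (this is an assumed approach, not the exact theme code):

```html
<script>
  // Hide the page until the fonts are ready, but never for more than
  // two seconds, so a font failure cannot leave the page blank forever.
  document.documentElement.style.visibility = 'hidden';
  var reveal = function () { document.documentElement.style.visibility = ''; };
  if (document.fonts && document.fonts.ready) {
    Promise.race([
      document.fonts.ready,
      new Promise(function (resolve) { setTimeout(resolve, 2000); })
    ]).then(reveal);
  } else {
    reveal(); // no Font Loading API support: show the page immediately
  }
</script>
```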

If you apply async or defer to a script, it is no longer render-blocking. It loads in the background. With async, it executes as soon as it has loaded. With defer, it executes just before the DCL event.
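
In tag form (file names are illustrative):

```html
<script src="tracker.js" async></script>  <!-- runs as soon as it has loaded -->
<script src="widget.js" defer></script>   <!-- runs just before DOMContentLoaded -->
```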

DOM Content Loaded (DCL)

The DOM (Document Object Model) is a computer’s representation of the HTML for a page. It uses a tree-like structure that matches how HTML elements contain each other. Scripts can manipulate the DOM to change the page.

Once the browser has processed all the HTML and render-blocking resources to create the DOM, it is ready to fire the DOMContentLoaded event. Just before it does that, it will execute any deferred scripts that have already been loaded.

This is an important moment because scripts can now safely play with the DOM and alter the page. For example, a review widget may add review stars and reviews to the page.

Scripts that alter the page typically wait on the DOMContentLoaded event before they do their thing. So if this event is delayed, these widgets are slow to show up on the page.

We want to make DCL happen as quickly as possible: Avoid slow render-blocking files, use preconnect and preload to get resources before they are asked for and needed, and make scripts async, so they don’t block the event.

Scripts that only use defer still delay DCL, because they have to finish loading so they can execute just before it. If it’s safe to do so, you can also add async; then the script will not block the event if it has not loaded in time. async behaviour trumps defer when a browser supports both.

Adding async or defer to a script can sometimes be tricky: it loads and runs in the background, so nothing else can depend on it. For example, code that uses jQuery needs jQuery loaded and run before it executes. And unfortunately, I often see BigCommerce sites render-blocked by multiple copies of jQuery. A tricky one to solve.

Many tag providers use a special code pattern to async the main script while letting people queue up their commands for when it is loaded. It’s so frequently used that our TagRocket app has a special function to do it for us. You can even specify a function to run when the script has loaded.
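
The pattern looks roughly like this (the function name, script URL, and commands are hypothetical; real tags such as gtag.js use the same idea):

```html
<script>
  // Commands are queued in an array until the async script loads,
  // at which point it drains and replays the queue.
  window.trackerQueue = window.trackerQueue || [];
  function tracker() { trackerQueue.push(arguments); }
  tracker('init', 'ACCOUNT-ID');   // safe to call before the script loads
  tracker('event', 'page_view');
</script>
<script async src="https://tags.example.com/tracker.js"></script>
```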

If you add defer or async to a script tag, test thoroughly to make sure everything still works.

Our tagging system, by default, gets going after DCL. All the essential stuff should have been done by then, and now it’s time for the background tasks like tracking.

The next milestone is when the page is classed as fully loaded.

OnLoad

And we get to the final event of page loading. The OnLoad event.

This basically happens when the browser has run out of things to do. All known resources are loaded and run. This is when the spinny load thing stops spinning and the user can assume the page is ready.

Often this is delayed by the lower priority tasks like setting up pixels and sending tracking data to the various channels. preconnect and preload can help, but don’t let these low priority resources get in the way of important stuff.

Another possible delay to onload is if you have a lot of images. This can be solved by…

Lazy Loading Images

Sometimes a page can be long and contain lots of images. By default, they all have to be loaded before we cross the finish line and fire OnLoad.

Modern browsers now natively support a lazy-load attribute. The browser decides which lazy-load images it needs now and defers the rest until later.

<img loading="lazy" src="image.jpg"/>

The BigCommerce Cornerstone theme currently uses a script (lazysizes) to support lazy-loading, as the built-in method is quite new.

I created a highly modified version of BigCommerce’s responsive image code (responsive-img.html):

  • Switches to native lazy-loading if the browser supports it
  • Only loads lazysizes if it is required
  • Supports LQIP images even when using native lazy-loading
  • Gives more control over how responsive images can be added
  • Supports the fetchpriority attribute
  • Remains backwards compatible
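
A sketch of the feature-detection idea behind the first two points (paths and class names are hypothetical, and the real responsive-img.html change is more involved); this would need to run after the DOM is parsed, e.g. from a deferred script:

```html
<script>
  if ('loading' in HTMLImageElement.prototype) {
    // Native support: promote lazysizes-style images to native lazy-loading.
    document.querySelectorAll('img.lazyload').forEach(function (img) {
      img.loading = 'lazy';
      img.src = img.dataset.src; // lazysizes keeps the real URL in data-src
      img.classList.remove('lazyload');
    });
  } else {
    // No native support: load the lazysizes polyfill on demand.
    var s = document.createElement('script');
    s.async = true;
    s.src = '/assets/js/lazysizes.min.js';
    document.head.appendChild(s);
  }
</script>
```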

I need to review all images using the code to ensure they use optimal sizes and are not lazy-loading prominent images.

Note that using lazysizes-based lazy-loading on images at the top of the page can negatively affect your LCP. From what I’ve seen, native lazy-loading has less of an issue, especially if you preload the important images. So switching to native where possible means most users will not have an LCP problem if an important image is accidentally lazy-loaded.

Delay Tracking Scripts

Scripts typically don’t need to load quickly if they don’t affect the look of a page. They can take bandwidth and CPU time from your more critical resources. Delaying them can move them out of the way, but at the same time, you don’t want to delay them too much. They still need to do their job in a reasonable amount of time, before a user moves on.

Adding defer to a script will make it low priority and delay it to run just before the DCL event. Note that most browsers will ignore the defer if you also async it.

An artificial way to delay a script is to listen for page events like DCL or OnLoad and dynamically add the script when they fire. It’s even possible to wait for the LCP event in Chrome browsers; that way, you can ensure tracking scripts do not delay LCP.
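
A sketch of both delays, injecting the script after the first LCP entry where the browser supports it, and otherwise after OnLoad (the script URL is hypothetical):

```html
<script>
  function addTracker() {
    var s = document.createElement('script');
    s.src = 'https://tags.example.com/tracker.js'; // hypothetical tracker
    s.async = true;
    document.head.appendChild(s);
  }
  if ('PerformanceObserver' in window) {
    // Chrome: fire once the first largest-contentful-paint entry arrives.
    new PerformanceObserver(function (list, observer) {
      observer.disconnect();
      addTracker();
    }).observe({ type: 'largest-contentful-paint', buffered: true });
  } else {
    // Fallback: wait for the window load event instead.
    window.addEventListener('load', addTracker);
  }
</script>
```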

If you only want to delay execution then you could preload the script but only add the real script tag when you want it to run.

BigCommerce’s built-in Google Analytics solution can slightly delay LCP. It loads early and can use some CPU time, which may push LCP back. Unfortunately, there is no easy way to delay it.

Conclusion

Here’s a waterfall of the full load process on my machine with no throttling. The result in this case (fast connection) is that the critical path to LCP is BigCommerce’s server response time for the HTML. And a bit more for the CPU to build the page. DCL and Load are delayed by running the theme’s main JavaScript file. And we are fully loaded within one and a half seconds.

If I throttle my connection to emulate a fast 3G connection, we still get the benefits of all the early loading, but we see that the fonts are loaded after the page is drawn, making the fonts a candidate for LCP delay. And now, the OnLoad event is delayed by loading the main CSS file and images.

For Largest Contentful Paint, it went from 3.9 to 2.3 seconds on mobile and 1.3 to 0.6 seconds on desktop. That should get everyone in Google’s good books.

And the user gets a very smooth transition without any layout shifts or font jitters. The page shows up with everything in place, even on slower connections.

You can play with the test page, but remember it is a test site, and I will be actively working on other things that will affect the PageSpeed Insights scores, like adding lots of scripts for our Tag Rocket app.

Hopefully, some of these ideas will make it into Cornerstone so that BigCommerce can potentially become the fastest eCommerce platform in the world.