For our BigCommerce Tag Rocket subscribers with GA4 switched on, you are already most of the way there. Just follow our simpler article on how to set up BigQuery and the report.

Update: In August 2022 the web.dev team updated their script to v3 in a way that was incompatible with the code in this article.

As a quick fix, change the referenced script from ‘https://unpkg.com/web-vitals’ to ‘https://unpkg.com/[email protected]/dist/web-vitals.attribution.iife.js’.

I’ve now updated the article to take advantage of the v3 script improvements, which include the addition of INP.

They also changed the Needs improvement rating from ‘ni’ to ‘needs-improvement’. So if you upgrade to the new code in this article, you may need to update your report generators.

This article includes full instructions for setting up my (Tony McCreath) Core Web Vitals report, built from your own users’ Core Web Vitals experience on your website. Setup is via Google Analytics 4 (gtag or GTM), BigQuery and Google Looker Studio.

Google Data Studio - Core Web Vitals
Core Web Vitals report overview – Click on the image to open the live sample report

Table of contents

  1. They stole my thunder
  2. What is this Core Web Vitals thing all about?
  3. Adding the code to your site (for GA4 using gtag)
  4. Adding the code to your site (for GA4 using GTM)
  5. Adding GA4 definitions (optional)
  6. Connect to BigQuery
  7. Create a BigQuery materialised table
  8. Schedule updates of the materialised table (optional)
  9. Make your own copy of the Core Web Vitals report
  10. Using the reports
  11. Feedback

They stole my thunder

At Google I/O 2021 the web.dev team made a Core Web Vitals presentation explaining how to Measure and debug performance with Google Analytics 4 and BigQuery. Basically, how to set up cool Core Web Vitals reports using your own website’s visitor data. This was backed up with a technical article explaining how they did it.

The web.dev team at Google I/O
It takes a lot to stun me

I was stunned. I’d spent the last few days working on the same thing for a presentation that I’m going to make at the DeepSEOcon conference later this year. They’d stolen my thunder 😲

I paced around trying to work out what to do next.

Hmmm

Eventually, it became clear. web.dev are the rulers of Web Vitals and the code I use to track those metrics. I had to pull my stuff into line so that it worked with their solution, i.e. make it so people can use my report on the same setup. Time to get to work.

Tony Computing
Should I use my modified Commodore 64 or the BBC? Or both. Let’s async multitask this.

web.dev’s article details the technical work involved. This article provides the specific steps you can follow to set it up if you are using the standard gtag or GTM implementation for GA4.

What is this Core Web Vitals thing all about?

Tracking LCP status over time. Green needs to cross the 75th percentile line for a good result.

Core Web Vitals are part of the upcoming page experience ranking signals from Google. They measure real user experiences as they view pages of a website via the Chrome browser (CrUX). Website owners want to give these users a good experience to benefit from the upcoming ranking factor.

Tools like Lighthouse and the Chrome performance report provide lab data. i.e. the developers’ own experience. This is good for detailed analysis and testing but does not accurately reflect the experience of your real visitors, who will have different network connections from different locations with different devices. Users also interact with the pages, unlike these testing tools. Interaction is required to calculate FID and affects CLS.

Lighthouse
Lighthouse reports don’t get FID, and their CLS may be low.

CrUX data gives you a real user view of the Web Vitals (field data) and can be accessed via tools like Page Speed Insights. However, this data is highly aggregated, so you only get a high-level view of your performance. In many cases, you will not get page-level data and will have to work from origin (domain) level data. The results are also an average over the last 28 days, meaning there is a big lag between making a change and fully seeing its effect in the report. Pages need to be visited by a threshold number of opted-in Chrome users within the 28 days to be included in the report.

Page Speed Insights
No field data or origin data for a popular page of mine. I’m not popular enough, it seems.

The Google Search Console has a Core Web Vitals report that also uses the CrUX data. This means it suffers from the same limitations, such as the 28-day lag. When there is not enough data for individual URLs, the Search Console reports on aggregated sets of URLs or even just the whole site. This means you often don’t see data at the page level, which is why people often see sets of URLs change from good to bad at the same time. The reported URLs are often samples from these sets.

It’s also worth noting that the Core Web Vitals report is based on CrUX data and not the Google search index. It has nothing to do with the page being indexed for search. It’s not uncommon for low-traffic sites to see no URLs in this report due to a lack of CrUX data, and drastic changes in URL counts can happen. This does not mean those URLs are no longer performing in search.

Search Console Core Web Vitals indicating that all URLs are grouped under the one origin level score

Another option is for website owners to directly gather their own Core Web Vitals data for their own site visitors. The advantage is that this includes data at the page view level for all visitors that can be tracked. This granular data makes it far easier to quickly spot where issues are and how improvements are affecting your metrics.

This article details how you can set this up on your own website to generate reports in Google Looker Studio that help you fix your Core Web Vitals issues. Let’s get started…

Adding the code to your site (for GA4 using gtag)

This solution assumes you have already implemented your GA4 tracking via gtag. See the next section if you use GTM. For BigCommerce stores using our Tag Rocket app, this is already done for you (and more) when you enable the GA4 tag.

web.dev has a web-vitals GitHub repository that provides a lot of detail on code you can use to track the Core Web Vitals metrics. They have also documented some extra code to help you Debug Web Vitals in the field. We will be using all that code, with some additions from me.

Cumulative Layout Shift Causes Report
Digging into the details of CLS issues, down to the on-page elements, thanks to the debug script

You need to place the following code on all pages. It sends all your Web Vitals events to your GA4 property. The code can go anywhere. I suggest just after your base GA4 code.
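In outline, the code loads the web-vitals attribution build and forwards each metric to GA4. The following is only a minimal sketch of that pattern (it assumes the v3 attribution build from unpkg and simplifies the debug logic, so treat it as illustrative rather than the exact code):

<script>
// Illustrative sketch only: forward Web Vitals metrics to GA4 via gtag.
function sendToGoogleAnalytics(metric) {
  var attribution = metric.attribution || {};
  var debugTarget = '(not set)';
  var debugEvent = '';
  var debugTiming = '';

  // Pick the most useful debug details for each metric.
  if (metric.name === 'LCP') {
    debugTarget = attribution.element || '(not set)';
  } else if (metric.name === 'CLS') {
    debugTarget = attribution.largestShiftTarget || '(not set)';
  } else if (metric.name === 'FID' || metric.name === 'INP') {
    debugTarget = attribution.eventTarget || '(not set)';
    debugEvent = attribution.eventType || '';
    debugTiming = attribution.loadState || ''; // v3 load states, not the old 'pre_dcl'/'post_dcl' values
  }

  gtag('event', metric.name, {
    metric_id: metric.id,         // groups events from the same page view
    metric_value: metric.value,   // the current value of the metric
    metric_delta: metric.delta,   // change since the metric was last reported
    metric_rating: metric.rating, // 'good', 'needs-improvement' or 'poor'
    debug_target: debugTarget,
    debug_event: debugEvent,
    debug_timing: debugTiming,
    event_time: Date.now()        // approximation; the full code may use the entry's own timestamp
  });
}

// Load the attribution build asynchronously and register the metrics we track.
var webVitalsScript = document.createElement('script');
webVitalsScript.src = 'https://unpkg.com/web-vitals@3/dist/web-vitals.attribution.iife.js';
webVitalsScript.onload = function () {
  webVitals.onLCP(sendToGoogleAnalytics);
  webVitals.onCLS(sendToGoogleAnalytics);
  webVitals.onFID(sendToGoogleAnalytics);
  webVitals.onINP(sendToGoogleAnalytics);
  webVitals.onFCP(sendToGoogleAnalytics);
  webVitals.onTTFB(sendToGoogleAnalytics);
};
document.head.appendChild(webVitalsScript);
</script>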

We’ve found that tracking page types can be very useful in segmenting the data and narrowing down which parts of the website are having trouble.

Core Web Vitals By Page Type Report
Looks like my search results pages and my tools page have a bit of a CLS issue. I should look deeper into them.

I also think gathering the effective connection type of a user (e.g. 4g), whether they have requested to save data, and the display width and height can be of value.

To enable these, you need to alter your core gtag code to send the ‘page_type’, ‘effective_connection_type’, ‘save_data’, ‘width’ and ‘height’ parameters, as shown below. Don’t forget to replace ‘PAGE TYPE NAME’ with your own code to dynamically get the page type name. How you set page_type is down to your CMS and how you want to segment your pages.

<script>
function getEffectiveConnectionType(){
    var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
    
    if(connection && connection.effectiveType) return connection.effectiveType;

    return 'unknown';
}
function getSaveData() {
    var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
    
    if(connection && connection.saveData) return connection.saveData;

    return 'unknown';
}
</script>
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async="async" src="https://www.googletagmanager.com/gtag/js?id=G-0BQR1PRHYJ"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-0BQR1PRHYJ', {
    page_type: 'PAGE TYPE NAME',
    effective_connection_type: getEffectiveConnectionType(),
    save_data: getSaveData(),
    width: window.innerWidth||document.documentElement.clientWidth||document.body.clientWidth,
    height: window.innerHeight||document.documentElement.clientHeight||document.body.clientHeight
 });
</script>

Tracking page types can be useful for reporting on a lot of things. GA4 also has reports that support the ‘content_group’ parameter, which I currently set to the same value.
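For example, adding it alongside page_type in the config call above would look like this (a sketch; replace the measurement ID and the placeholder value with your own):

  gtag('config', 'G-0BQR1PRHYJ', {
    page_type: 'PAGE TYPE NAME',
    content_group: 'PAGE TYPE NAME' // same value as page_type, so GA4's content group reports work too
  });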

Once you have added that, you will be sending the same Web Vitals events and parameters as used by the solution web.dev provided. This includes the standard metrics parameters (metric_id, metric_value, metric_delta), the debug parameters (debug_target, debug_event, debug_timing, event_time), the rating for the metric (metric_rating) plus the optional page_type parameter.

Adding the code to your site (for GA4 using GTM)

Simo Ahava has written a great article (as usual) on using GTM to send CWV events to GA4, which uses a Custom Template he developed. Unfortunately, it only does the basic metrics due to a limitation of custom templates, and it uses a different set of GA4 parameters to those used by web.dev and me. This section provides an alternate GTM solution that supports the debug info and uses parameters compatible with the web.dev report and mine.

I’ll assume you are familiar with GTM and have added the GA4 Configuration tag already.

Create a Custom HTML tag that fires on all pages and contains the following code. It’s very similar to the gtag one, but it sends the Core Web Vitals data to the dataLayer, and the JavaScript is downgraded to work with GTM. I’ve followed Simo’s dataLayer event naming convention, so our solutions should be compatible.
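As an outline, the Custom HTML tag does something like the sketch below (again assuming the v3 attribution build; the CLS rounding follows what I believe is Simo’s convention, and the debug logic is simplified, so use the full tag code rather than this sketch in production):

<script>
// Illustrative sketch only: push each Web Vitals metric into the dataLayer
// using the webVitalsMeasurement / coreWebVitals naming convention.
function sendToDataLayer(metric) {
  var attribution = metric.attribution || {};
  var debugTarget = '(not set)';
  var debugEvent = '';
  var debugTiming = '';

  if (metric.name === 'LCP') {
    debugTarget = attribution.element || '(not set)';
  } else if (metric.name === 'CLS') {
    debugTarget = attribution.largestShiftTarget || '(not set)';
  } else if (metric.name === 'FID' || metric.name === 'INP') {
    debugTarget = attribution.eventTarget || '(not set)';
    debugEvent = attribution.eventType || '';
    debugTiming = attribution.loadState || '';
  }

  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    event: 'coreWebVitals',
    webVitalsMeasurement: {
      name: metric.name,
      id: metric.id,
      value: metric.value,
      delta: metric.delta,
      // CLS is multiplied by 1000 before rounding (an assumption based on Simo's template).
      valueRounded: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
      deltaRounded: Math.round(metric.name === 'CLS' ? metric.delta * 1000 : metric.delta),
      rating: metric.rating,
      debugTarget: debugTarget,
      debugEvent: debugEvent,
      debugTiming: debugTiming,
      eventTime: Date.now()
    }
  });
}

// Load the attribution build and register the metrics.
var webVitalsScript = document.createElement('script');
webVitalsScript.src = 'https://unpkg.com/web-vitals@3/dist/web-vitals.attribution.iife.js';
webVitalsScript.onload = function () {
  webVitals.onLCP(sendToDataLayer);
  webVitals.onCLS(sendToDataLayer);
  webVitals.onFID(sendToDataLayer);
  webVitals.onINP(sendToDataLayer);
  webVitals.onFCP(sendToDataLayer);
  webVitals.onTTFB(sendToDataLayer);
};
document.head.appendChild(webVitalsScript);
</script>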

You also want to add a tag sequence to this tag to make sure the GA4 Configuration tag fires before this one. We don’t want to send events to GA4 before it exists.

GTM Core Web Vitals Custom HTML Tag
GTM Core Web Vitals Custom HTML Tag firing after the GA4 Configuration Tag

We now need to define all the user-defined variables that we send in the dataLayer. Again I’ve followed Simo’s lead on the convention. Note that my report does not use the rounded values, but I’ve kept them for compatibility with other solutions.

Variable name | Data Layer Variable Name
DLV – webVitalsMeasurement.name | webVitalsMeasurement.name
DLV – webVitalsMeasurement.id | webVitalsMeasurement.id
DLV – webVitalsMeasurement.value | webVitalsMeasurement.value
DLV – webVitalsMeasurement.delta | webVitalsMeasurement.delta
DLV – webVitalsMeasurement.valueRounded | webVitalsMeasurement.valueRounded
DLV – webVitalsMeasurement.deltaRounded | webVitalsMeasurement.deltaRounded
DLV – webVitalsMeasurement.debugTarget | webVitalsMeasurement.debugTarget
DLV – webVitalsMeasurement.debugEvent | webVitalsMeasurement.debugEvent
DLV – webVitalsMeasurement.debugTiming | webVitalsMeasurement.debugTiming
DLV – webVitalsMeasurement.eventTime | webVitalsMeasurement.eventTime
DLV – webVitalsMeasurement.rating | webVitalsMeasurement.rating

We’re getting there. Isn’t GTM meant to make things easy?

Time to create a trigger for the coreWebVitals event. Like this…

GTM Event Core Web Vitals
The key thing is to name the event ‘coreWebVitals’.

We’re now on to the GA4 event tag itself. Here we use our new trigger and add all the parameters we want to send.

First, set the Event name to {{DLV – webVitalsMeasurement.name}}

The following table shows what to send, so it works with the web.dev report and mine.

Parameter name | Value
metric_name | {{DLV – webVitalsMeasurement.name}}
metric_id | {{DLV – webVitalsMeasurement.id}}
metric_value | {{DLV – webVitalsMeasurement.value}}
value | {{DLV – webVitalsMeasurement.delta}}
metric_delta | {{DLV – webVitalsMeasurement.delta}}
debug_target | {{DLV – webVitalsMeasurement.debugTarget}}
debug_event | {{DLV – webVitalsMeasurement.debugEvent}}
debug_timing | {{DLV – webVitalsMeasurement.debugTiming}}
event_time | {{DLV – webVitalsMeasurement.eventTime}}
metric_rating | {{DLV – webVitalsMeasurement.rating}}
GA4 – Event – Core Web Vitals

Like with the gtag implementation, we recommend sending a page_type parameter to GA4 so that you can segment your reports. How you determine the value of the page_type is down to you. You will then have to send it in the dataLayer, create a variable for it, and add it to your GA4 Configuration fields.

GA4 Configuration Page Type
I get my page type from a schema.org based variable I send to the dataLayer on all pages
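For example, a hypothetical push (adapt the value to however your CMS exposes the template name), placed above the GTM container snippet, could look like the sketch below. You would then map it with a ‘DLV – page_type’ Data Layer Variable and add page_type to the fields of the GA4 Configuration tag:

<script>
  window.dataLayer = window.dataLayer || [];
  window.dataLayer.push({
    page_type: 'product' // e.g. 'home', 'category', 'product', 'article'
  });
</script>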

Setting up the effective connection type requires the following Custom JavaScript variable called ‘Effective connection type’ to be created:
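A sketch of that variable, mirroring the helper used in the gtag version above:

function() {
  var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
  // Return the effective connection type (e.g. '4g') or 'unknown' if unsupported.
  if (connection && connection.effectiveType) return connection.effectiveType;
  return 'unknown';
}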

You can then add it as a field called effective_connection_type in your GA4 Configuration tag.

GA4 Config Effective connection type

You can follow the same process for save data using this JavaScript variable:
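Again, a sketch of a ‘Save data’ Custom JavaScript variable:

function() {
  var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
  // Return the saveData flag if the user has requested reduced data usage, otherwise 'unknown'.
  if (connection && connection.saveData) return connection.saveData;
  return 'unknown';
}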

And if you want, you can follow the same pattern to add ‘width’ and ‘height’.
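For example, a ‘Width’ variable could be the following sketch (and ‘Height’ the same using the equivalent height properties):

function() {
  // Viewport width with fallbacks for older browsers.
  return window.innerWidth || document.documentElement.clientWidth || document.body.clientWidth;
}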

All events will now return page_type, effective_connection_type and save_data.

I’ll leave it up to you on how you test and finally publish it. Simo’s article has a good section on testing.

Adding GA4 definitions (optional)

This solution does not need you to add these parameters as definitions in the GA4 admin. However, defining them lets you directly report on them in GA4 and when directly connecting to Looker Studio. It also gives me an opportunity to briefly explain what they do.


Dimension Name | Scope | Description | Event Parameter
Web Vitals metric ID | Event | Used to group Web Vitals events that happen in the same page view | metric_id
Debug target | Event | Identifies the selector path to the element that contributed most to the metric | debug_target
Debug timing | Event | For FID events, indicates whether the event happened before (‘pre_dcl’) or after (‘post_dcl’) the content was loaded | debug_timing
Event time | Event | The time when the Web Vitals event happened | event_time
Web Vitals rating | Event | ‘good’, ‘needs-improvement’ or ‘poor’, based on the thresholds set by web.dev. Used in the LCP, CLS and FID events | metric_rating
Page type | Event | Used to identify the type of page (e.g. template name) for segmenting page-based reports | page_type
Effective connection type | Event | Uses the Network Information API to get the effective connection speed of the user: 4g, 3g, 2g, slow-2g or unknown | effective_connection_type
Save data | Event | Whether the user has requested to save data, e.g. because they are on a slow or expensive network | save_data

Metric Name | Scope | Description | Event Parameter | Unit of measure
Web Vitals value | Event | The value of the Web Vital. Used in the LCP, FID and CLS events | metric_value | Standard
Web Vitals delta | Event | The difference since the last report for this Web Vital. ‘value’ is also set to the delta | metric_delta | Standard

Connect to BigQuery

web.dev and I went the route of using BigQuery for our reports. The reason for this is the limitations of directly connecting GA4 to Looker Studio.

  1. The mapping of custom GA4 properties to Looker Studio fields is not reliable. Changing the data source causes the fields to shuffle around in the report. Hopefully, they will fix this at some point.
  2. Looker Studio has no mechanism to group the Web Vitals events by page view to determine the final value for a Web Vital. We use BigQuery to pre-group the data for us.

First, you want to set up a Google Cloud account and create a project to connect GA4 to.

Then go into the GA4 admin, select ‘BigQuery Links’ and click on the Link button. You will then be able to select your new project and complete the linking process. You need to enable Daily export.

GA4 Admin BigQuery Linking
Connecting GA4 to BigQuery

Once complete, it will take about 24 hours before your first GA4 export table is created.

You can check the project by selecting it in your Google Cloud account and then selecting BigQuery from the sidebar. You should see the project listed. Once the first export is complete, you will be able to expand it to see the exported tables. This is what mine looked like once the table was created.

BigQuery GA4 Export
A BigQuery project containing a GA4 exported event table.

Time for a break. See you tomorrow…

 Sunset in Glenelg
Glenelg beach near Adelaide, Australia. A great place to unwind after a BigQuery day

Create a BigQuery materialised table

Morning.

After the first events table is exported, we can move to the next step. We need to convert the data so that it can be easily used by Looker Studio. As mentioned before, the main task is to work out the final Web Vitals scores for each page view so that Looker Studio can deal with it.

We perform this conversion by adding an SQL query that creates a new table with the data we need. The query we use is based on the one documented by web.dev with a few extras added so that it can support my report.

If you’ve already created the materialised table as per web.dev’s instructions, you will need to add the ‘Tony’s additions’ sections and alter the ‘Tony’s modification’ lines in that SQL for my report to work.

Otherwise, click the ‘compose new query’ button and add the following SQL to the editor. You will need to edit all occurrences of DatasetID to make it use your project and dataset.

Once you have edited it, you can run it for the first time. If it does not work, you may not have got your table names right.

When it works, it will create a table called ‘web_vitals_summary’ in the same place as your GA4 export. We’re going to reference that later.

BigQuery Materialised Table
Just after running the materialised table SQL script

Save the query as ‘Web Vitals Summary’ so you can re-run it later. You can find saved queries via an option in the footer.

Creating this materialised table not only makes it easier to create reports, but it also may save you money. Queries cost money once you’ve used up your free monthly allowance of 1 Terabyte. This table reduces the size of the queries you make to BigQuery when viewing reports, making them faster and cheaper.

Schedule updates of the materialised table (optional)

At the moment, you would need to periodically re-run the query to get the latest data into your report. You may want to leave this as a manual process if you are hitting the free monthly allowance. If not, you can set the query up to be scheduled every day.

To do that, when editing the query, you will see a Schedule option at the top. Create one and name it ‘Web Vitals Summary’. You may have to enable the API, refresh the page, log in and maybe go back to the saved query to get it to work (this one is a bit flaky). You can find scheduled queries from the expandable sidebar.

GA4 does not seem to be consistent in the time it does its daily export, so you may have to live with a day or two of lag in the report (or run it manually as needed).

Make your own copy of the Core Web Vitals report

Almost there. We just need to create a copy of the report that uses your table.

Open the Core Web Vitals report and select “make a copy”.

At this point you are prompted to update the data sources for the new report. In most cases you only need to update the first data source (the blue BigQuery icon) to point to your table; the rest can be left as they are. Click on the “new data source” option and select BigQuery. Then navigate to your table. Check “Use event_timestamp as date range dimension”. Connect, Add to Report, Copy Report and you’re there.

Then rename your new report so that it identifies your store.

You can then publish and share the report so others can see it.

Using the reports

Over time more daily data will be imported from GA4, making the time-based reports great for tracking progress.

Browse through the different pages in the report via the left-side menu.

The dropdown filters at the top affect the whole report. Why not tunnel down to a specific location, browser, page type, or even whether the user was engaged or not?

Core Web Vitals Filter
Looks like my web application pages just pass LCP for engaged desktop Chrome users in Russia. Phew!

Try clicking on table rows. Many will further filter down the data for that page.

Page Type Table Filter
Yep, some of my web application pages have an LCP issue. Definitely need to look into that.

Page URLs and the PSI (Page Speed Insights) columns are links.

I think there may be some interesting data to come from the distribution reports with regard to engagement. I’m already seeing longer LCP values causing a reduction in engagement with mobile users.

LCP Engagement
Engagement rates drop as LCP increases.

This is your own copy of the report. Feel free to edit it to make it fit your needs. e.g. Changing the above chart to a 100% stacked chart will probably work well once there is enough data. If you improve it or add something great…

Feedback

Please send me any feedback you have. Ideas, issues, or just to show off your report scores. My @TonyMcCreath Twitter account is a good place to do that. I’ll be waiting.

Tony Waiting
What should I do next?

64 Responses

  1. Hi
    Thanks for sharing this complete guide for checking coreWebVitals, I think there is a mismatch in GTM, in the parameters table.
    value : {{DLV – webVitalsMeasurement.delta}}

    1. Hi Amin, could you clarify what this mismatch is? My solution does not use the delta, so it would not have been tested!

  2. Thanks for the writeup! This topic is going to be very important in the next several months. I hope you continue to share more on GA4, CWV, and BigQuery!

    1. It is possible; in fact, the web-vitals script guide includes examples for GA. I chose GA4 because it has a richer event model that makes it easier for me to send many different bits of data in one event.

  3. Hello, Tony.

    Using Simo’s template and running queries from the web.dev article won’t work, right?

    This is the GA4 event https://ibb.co/Z1BpVnf
    This is what I have in GA4 https://ibb.co/jWBLvYw
    This is in GCP – BigQuery https://ibb.co/3TMnVp0

    With this query
    SELECT * FROM `my_project_id.analytics_XXXXX.events_*`
    WHERE event_name IN ('LCP', 'FID', 'CLS')
    I got this https://ibb.co/XkzTM4v

    And using this query https://web.dev/vitals-ga4/#lcp-fid-and-cls-at-the-75percent-percentile-(p75)-across-the-whole-site
    I got this https://ibb.co/d7M69zs – no FID, no LCP, just 1 CLS

    Seems that I’m not getting correct data… Any idea how to fix this?

    Thanks!

    1. I think you need to change the parameter names to match up. You’re using web_vitals_measurement_value while the web.dev query (and my queries) use metric_value. The web.dev query also requires a metric_id parameter.

  4. Hi,
    I used Simo’s template lately and I’m trying to use your solution (especially because of the nice Data Studio presentation and GA4 possibilities).
    BUT – your dataLayer pushes only work when I have two tags active – your Custom HTML tag and Simo’s Core Web Vitals template. When I paused Simo’s tag, coreWebVitals events are no longer visible in the dataLayer and I get an error in my console: “Uncaught ReferenceError: webVitals is not defined at HTMLScriptElement.a.onload (:1:109)”
    Should I keep both tags, or will that negatively affect the data, e.g. duplication?

    1. It sounds like my Core Web Vitals tag is not running or something is trying to use it before it loads the script that creates the webVitals object.

      I don’t think having both tags running is a good idea. As in the previous comment, you could make Simo’s template work by altering a few parameter names.

    1. I would presume so. You would lose data for people who did not accept.

      I think the script will still work once run and gather the same results.

      You could speed things up by preloading any scripts ready for their use on acceptance.

  5. Hello Tony,

    First, thank you for sharing Simo’s template; it is very helpful to see what causes slow loads.
    I have used it for three months, but it suddenly stopped collecting data on 1st Sep.
    There is an error in the code from the step “Adding the code to your site (for GA4 using GTM)”; the exception is:
    ReferenceError: webVitals is not defined at a.onload (:1:109)

    Has anyone else met this issue too? Any idea how to fix this? I need help so much.

  6. The link mentioned above is for the script tag in GTM, right? Thank you for this guide!

    I’m just wondering.. if everything seems to be connected right but “There is no data to display.” is shown in BigQuery, and Data Studio shows “Invalid/Missing dimensions, metrics, or filters.” and “Data Studio has encountered a system error.”, is this possibly because there is no data exported yet from GA4?

    1. Which link?

      It takes about a day before GA4 creates the required table in BigQuery. And Data Studio can show that sort of error if there is no data yet.

      I have just updated all the code and the article, so there may be bugs introduced. But this does sound like the data needs to get in.

      I just noticed that the embedded code has not updated. Click on the link at the end of the embedded codes to get the latest version. I’ll work out why they are out of date.

  7. Hi – I set this up earlier in the year and it’s been running perfectly, but it stopped working in mid-August. I saw your note at the top and updated the script I have in GTM with one that is currently posted, but I am not seeing any of the Core Web Vital events in the DataLayer. In the console I am seeing an error
    “Uncaught ReferenceError: sendToGoogleAnalytics is not defined at a.onload”
    I think lines 49-54 need to be switched from “sendToGoogleAnalytics” to “sendToDataLayer” to align with line 6?

    After making that change the error is now gone, but a new error has cropped up in the console
    “Uncaught ReferenceError: debugTarget is not defined
    at sendToDataLayer
    at web-vitals.attribution.iife.js:1:9732
    at web-vitals.attribution.iife.js:1:2022
    at web-vitals.attribution.iife.js:1:5966”
    Not sure how to resolve this one, so I am hoping you might be able to help.

    This has been incredibly useful to use so I really want to get it up and running again, thanks!

  8. Hi Tony,

    thank you so much, this is sooo helpful!

    I also almost got it working ;).

    I believe you make use of quite a few custom parameters and fields in Google Data Studio, correct?

    So when creating a new data source based on the BigQuery table it doesn’t have those. Do you have definitions of those fields somewhere so I can recreate them? That would be fantastic!

  9. Hi David,

    When you copy the report and select your BigQuery table it should create a new dataSource for you with the required fields.

    I have updated things a lot lately, so there may be some glitches in the process.

    If you’re happy to give me access ([email protected]) to your copy of the report I can take a look.

  10. I copied and pasted the Core Web Vitals code into a Custom HTML tag in GTM. In preview mode, I get “debug_target” with an undefined value. What could be the problem? Thanks.

    1. Hm, the logic should not let that happen. It should be an empty string or ‘(not set)’ if everything was undefined. I’ll check my test environment.

    2. It’s working for me. Note that not all metrics have all the properties. debug_target is only for LCP and CLS. In other cases it will be blank.

    3. Hi Simon, I had the same problem. It seems to me that the guide has a little error at the end of the “Adding the code to your site (for GA4 using GTM)” paragraph, in the table with columns:”Variable name” and “Data Layer Variable Name”

      DLV – webVitalsMeasurement.debugTarget webVitalsMeasurement.debug_target (not “debugTarget”)

      DLV – webVitalsMeasurement.debugEvent webVitalsMeasurement.debug_event (not “debugEvent”)

      DLV – webVitalsMeasurement.debugTiming webVitalsMeasurement.debug_timing (not “debugTiming”)

      DLV – webVitalsMeasurement.eventTime webVitalsMeasurement.event_time (not “eventTime”)

      1. Nice spot Lox,

        I think this bug was introduced when I updated the web dev library it uses. A bit of copying from the gtag code where the property naming convention is different. That change also removed the need to create a debugTarget2.

        I’ve updated the code to generate the right properties and removed the documentation on debugTarget2. Hopefully, that fixes things.

        Your suggested changes would also work if the code is left at version 3.1

  11. Hi Tony
    Thank you for a great tutorial and helping us understand our end users real experience on our site.

    I have implemented your script and unfortunately my conclusion so far is that the script drops our PageSpeed Performance score by 10-15 points, which is a really big hit and makes it almost impossible to defend. Ironically, we’re doing this to enhance our performance, but this is dragging too much in the opposite direction for us to implement.

    Do you have any advice on how to make this perform better?

    1. Hi Christoffer,

      I’m surprised it causes such a points change. It includes one small asynchronous file (3.8k), some inline code and some extra calls to GA4. I’d be interested to see its reasons for those point losses.

      If you are using GTM and specifically added that to support this, then you would be adding more of an overhead. Same if you added gtag and GA4 just for this.

      I don’t chase PSI points, as in many cases they do not relate to the user’s experience. That is the point of the Core Web Vitals, which focus on what affects the user. A 3.8k asynchronous file will not.

      1. Hi Tony and thanks for the quick response.

        I totally agree with the statement that real user data is the real data to chase but we are always testing our new implementations upfront with PSI to test if any new code might impact our users’ experience.

        I looked a bit more into it and it seems to be the dataLayer push that causes a hit. I’ve tried to move it to a different event than script.load, but I don’t think it has any impact.
        I will try and debug further.

        Another question/request;
        Some of our debug targets are more than 100 characters long and they are being truncated in the Looker Studio report. I can see a comment in your BigQuery statement fixing the issue, but it doesn’t seem to work. Anything else I can do?

        1. It would be interesting to see if you can reduce some of those points. Could you confirm if you are using GTM, and if you added GTM just for this tracking? And if you added GA4 tracking just for this.

          If you privately share your site, I can take a look.

          The latest web-vitals script truncates the selector to 100 characters, and we currently have no control over it. But it is smarter than previous versions as it tries to end up with a working selector that is under 100 characters. The previous code just cut things off.

          This means my BigQuery debug_target2 field trick is of no value with the latest script. I’ve kept it in to avoid breaking reports.

          1. I’ve done some more debugging and it seems I can reduce the impact on the PSI performance score by setting the script tag to either async or defer (I haven’t noticed a significant difference between the two):

            script.defer = "true"

            Another thing that seemed to help was moving the dataLayer push to the document’s “load” event rather than the script’s load event. This makes sure that the push to the dataLayer happens after everything else is loaded:

            “The load event is fired when the whole page has loaded, including all dependent resources such as stylesheets, scripts, iframes, and images.”
            https://developer.mozilla.org/en-US/docs/Web/API/Window/load_event

            The approach by delaying both the script execution and pushing to the datalayer is also backed by the web-vitals docs:

            “The web-vitals library uses the buffered flag for PerformanceObserver, allowing it to access performance entries that occurred before the library was loaded.

            This means you do not need to load this library early in order to get accurate performance data. In general, this library should be deferred until after other user-impacting code has loaded.”
            https://github.com/GoogleChrome/web-vitals#install-and-load-the-library

            Regarding the truncated debug_target, I actually created a web-vitals ticket that was quickly answered with a link to a previous comment by you (an ironic full circle!). It seems that you got it to work satisfactorily? Something about the target being truncated from left to right instead of the opposite?

  12. I was under the impression that scripts added via JavaScript are async. Maybe specifically stating it lowers their priority.

    defer does not delay loading (besides lowering its priority), but it may delay execution until just before DCL. If DCL is late, then it would delay things a bit more.

    Delaying the adding of the script until the document load event will definitely push things back. Using defer at that point is probably moot, as the load event is normally after DCL. The risk here is that the user leaves before the load event is fired.

    When I have time I will also test this and update the article. I’d be interested in what points change by configuring things differently.

    I saw the ticket but did not realise it was you. The code to create the selector builds it from the most specific element first. When a new part is going to make it exceed 100 characters, it stops. So it always builds a valid and hopefully very specific selector.

    1. Hi Tony
      Looking forward to hearing about your results.

      We are starting to use your Looker Studio template, which is awesome, but we have issues with the page crashing constantly in Chrome. Do you have any methods to debug why this is happening?
      We can only browse 1-2 pages before it crashes.

    2. Hi again Tony
      It seems to be specific pages that crash, and they also crash in edit mode, so I have no chance of debugging the setup of the page that doesn’t work.
      However, I noticed that some of the data sources are unknown when copying your report. You mention that we should connect to our BigQuery table, but what about these?
      Web Vital Distributions – TTFB
      Web Vital Distributions – FCP
      Web Vital Distributions – INP
      Web Vital Distributions – LCP
      Web Vital Distributions – FID
      Web Vital Distributions – CLS
      (Screenshot here: https://prnt.sc/ON-JuD599Enn)

      In the tutorial they are said to be left “as they are”, but I can see the references in my copy and that they aren’t working (i.e. on LCP Distribution, which is one of the few pages that’s working). It seems that they link to some Google Sheet that I don’t know about.

    3. Okay, sorry for spamming, but I now seem to have fixed the crash issue. There’s still one thing that I can’t figure out.
      It seems that the blended data used in the Distribution and Engagement pages doesn’t work when copying.
      In “Table 1” all 4 dimensions are “Invalid dimension” and I can’t figure out how to set them correctly.

      I would really appreciate any help as this seems to be the last hurdle before I’m fully set up with your awesome guide.

      (You’re very welcome to reach out to my email, if you can see it on my comment, and I can give you access to our report for debugging)

  13. Hi Tony

    Thank you very much for the resource, it is very well explained and very interesting! It saved me a huge amount of time.

    I am noting some trouble I had, in case someone else runs into the same issues:

    – When I copied the Looker Studio report and connected it to my data source, all widgets went blank, displaying an error “See Details.” It appears that this was due to the filters not populating properly on my dashboard. To resolve this, I just had to go to (Resource > Filters) and fix the field names.
    Also, in the widgets, “metric_value” and some other fields were replaced with an expression like “t0.fx_metric_value.” Again, I simply needed to change this expression to the correct “metric_value.”

    – I was unable to retrieve “webVitalsMeasurement.rating” on my webpage, but it seems to be just a classification of the metrics based on the thresholds. So, I added the threshold logic for each metric to the provided query. (Example for FID – a longer query for each metric)
    CASE WHEN event_name = 'FID' THEN
      CASE
        WHEN metric_value >= 0 AND metric_value <= 100 THEN 'Good'
        WHEN metric_value > 100 AND metric_value <= 300 THEN 'Needs Improvement'
        ELSE 'Poor'
      END
    END

    It's likely that I didn't reproduce a step as I should have, but anyway, I was able to fix it quickly.
    I hope this helps anyone who might face similar issues!

    Thanks again for the amazing content!

    1. Thanks for the information, it should help others.

      I saw that problem a lot when directly connecting Looker Studio to GA4. In that scenario, the order you add the dimensions affects how they map in the dataSource.

      I’ve not seen that same issue when using BigQuery, but I have seen that adding a single field to a BigQuery table can break things.

      Unfortunately, Looker Studio is a bit fragile.

  14. Hi Tony,

    Can we grab the FID cause element, such as which user action caused the FID (for example, clicking on a menu or icon), in the same manner we grab the LCP and CLS causes? Could you kindly assist me if there is a way?

  15. Hi Tony,

    Thanks for the quick update. Yeah, FID causes were added to the page, but it is still showing the LCP elements in the FID causes as well. Currently, our site scores are affected by FID, which is why we want to get the element details to see what exactly is causing the issue.
    Ex:

    Main Element Selector | FID | Page views | LCP
    #iv-irol>div.iv-mouse-capture-overlay | 19.8 | 148 | 19.8
    #ContentPane>article.article | 292.3 | 94 | 292.3

  16. Hello Tony,

    Is there a method to identify the JS file that is preventing the interaction event and causing FID?

    Example: While the user is clicking the menu, let’s say Google Tag Manager or other third-party or first-party JS is running and it is blocking the interaction event. If so, can we pull that JS into the RUM dashboard?

    1. That info is not in the event that the web-vitals library fires, so I think it would be hard to get hold of it. When they fix the identification of the element involved, we would at least have a clue about what the interaction was.

  17. Hi Tony,

    I landed here while searching for a solution to “Discover what your real users are experiencing – No Data” on Pagespeed Insights report. My website’s report does not show any data for this. If I add the code from your tutorial above, will the data show in PSI report?

    1. Unfortunately not. PSI uses either its own lab data (a live test) or data from CrUX, which comes from Chrome users who have opted in to Core Web Vitals data gathering. This article is about gathering your own user data into GA4 and BigQuery, and creating your own reports from that.

  18. Hi Tony,

    How can I migrate/upgrade the RUM dashboard (Looker Studio) with the newly added metrics like INP? The current BigQuery table does not have those records. Do I need to create a new analytics ID altogether, or can I add the new fields and update the Looker Studio report? If possible, please provide any reference.

  19. Hi Tony,

    Can we upgrade our existing dashboard to v3.4? Is there a way to upgrade directly by repointing the data source?

  20. Hi!
    The first thing I want to say is wow, thanks for the material!

    I set up my WordPress site and data collection with Analytics and transmission to BigQuery using Simo’s material. Thanks to your article, I executed queries inside BigQuery by changing the values of some variables.

    How to achieve such a beautiful representation in the form of graphs, as in your screenshots? Is there a manual? It looks really cool!

    1. The article explains how to generate the report in the section titled “Make your own copy of the Core Web Vitals report”

  21. Hi Tony,

    I’ve been scouring the internet to create the ideal Looker Studio dashboard for CWV and stumbled upon your article. I was particularly impressed by the specialized debugging script you discussed.

    I’ve recently integrated ‘Performance_timing Scripts’ from “Thyngster” and ‘Core Web Vitals’ via the GTM template by Simo Ahava. Both sets of data are being sent to GA4 without any issues. Would it be possible to only implement the final scripts from your article to visualize this data in Looker Studio through BigQuery?

  22. Hi,
    thanks for your work here, very useful.
    I am not seeing the CLS cause anymore. Any idea why that is?
    Keep up the good work please!
    Jan

    1. The data is being sent. Maybe the Looker Studio report got corrupted. I’ve seen it alter fields at times. Recreating it may fix the issue.

  23. Thank you for a great tool.
    After using it, I was wondering: is it possible to add Causes for INP as well as for CLS?
    I would like to use it to identify problematic pages and targets.
    Maybe I can adjust it myself, but I am a beginner and cannot use Looker Studio well.

    1. INP causes data should be captured, but the report does not include a section on it yet.

      Unfortunately, I don’t have the time at the moment to update the report. It has been on my list for a while.

  24. Hi, great work, thank you very much I was able to set everything up nicely!

    The only question I have is the following: since the query creates a new table every time, BigQuery does not allow me to schedule it. Also, it seems not optimal to re-query all of the available data every time, when we only need to update the previous day. Is it possible to adjust the query so that it updates the table incrementally every day instead? Thank you!

    1. My commercial solution for BigCommerce uses incremental tables. Each day, it re-writes just the last 3 days of data and leaves the older data intact. This saves a lot on the price of the query. The steps:

      Pick a date to replace the data from. Say 3 days ago.
      DELETE rows after that date.
      INSERT rows based on GA4 data after that date.
