Reflections on the Behavioral Aspects of Web Performance

A reflection after 6 months of performance work

There are numerous resources on the technical aspects of web performance optimization, but the behavioral side often gets neglected.

I wanted to reflect on some dedicated performance work I have done within the last 6 months to see what has really made it successful.

Environment

For context, I work in publishing on a very small engineering team, and our sites compete with sites from much larger organizations that have more engineering staff, deeper pockets, and, in some cases, whole teams dedicated to the performance of the sites in their portfolios.

Selling and getting buy-in

Without buy-in, performance work is doomed from the start, and it will get pushed aside for new feature development or other requests.

In my case, the buy-in came through an analysis of our competitors. I got a list of competing sites from our marketing staff and ran an analysis of a selected page on each site with Google Lighthouse on the command line.

Each run gets saved to a SQLite database so I can easily query the data.
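
To give a sense of the setup, here's a rough sketch of what that kind of script can look like. I'm assuming the Lighthouse CLI is installed; the audit ids and the table layout below are illustrative rather than my exact schema, and audit ids can differ between Lighthouse versions.

```python
#!/usr/bin/env python3
"""Rough sketch: run Lighthouse against each URL and store a few
Core Web Vitals numbers in SQLite for later querying."""
import json
import sqlite3
import subprocess
import sys
from datetime import datetime, timezone


def run_lighthouse(url: str) -> dict:
    # Ask the Lighthouse CLI to print the full report as JSON on stdout.
    result = subprocess.run(
        [
            "lighthouse", url,
            "--output=json", "--output-path=stdout",
            "--only-categories=performance",
            "--chrome-flags=--headless", "--quiet",
        ],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def save_run(db: sqlite3.Connection, url: str, report: dict) -> None:
    # Pull a handful of metrics out of the report's audits section.
    audits = report["audits"]
    db.execute(
        "INSERT INTO runs (url, tested_at, ttfb_ms, fcp_ms, lcp_ms, cls) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (
            url,
            datetime.now(timezone.utc).isoformat(),
            audits["server-response-time"]["numericValue"],
            audits["first-contentful-paint"]["numericValue"],
            audits["largest-contentful-paint"]["numericValue"],
            audits["cumulative-layout-shift"]["numericValue"],
        ),
    )
    db.commit()


if __name__ == "__main__":
    conn = sqlite3.connect("lighthouse.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS runs ("
        "url TEXT, tested_at TEXT, ttfb_ms REAL, fcp_ms REAL, "
        "lcp_ms REAL, cls REAL)"
    )
    for page_url in sys.argv[1:]:
        save_run(conn, page_url, run_lighthouse(page_url))
```

Once the runs are in SQLite, comparing sites or tracking a metric over time is just a SQL query away, which is exactly what made the competitor report quick to put together.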

I used Core Web Vitals (CWV) as the basis of the report because SEO is important to our business. Since the CWV metrics originated at Google, they have some weight behind them.
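
For reference, Google publishes "good / needs improvement / poor" thresholds for these metrics, and a tiny helper makes it easy to translate raw numbers into those labels when writing up a report. The thresholds below are the published ones at the time of writing; the helper itself is just an illustration.

```python
# Google's published Core Web Vitals thresholds (at the time of writing):
# LCP: good <= 2.5s, poor > 4.0s. CLS: good <= 0.1, poor > 0.25.
def rate(value: float, good: float, poor: float) -> str:
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"


def rate_lcp(seconds: float) -> str:
    return rate(seconds, good=2.5, poor=4.0)


def rate_cls(score: float) -> str:
    return rate(score, good=0.1, poor=0.25)


print(rate_lcp(8.434), "/", rate_cls(0.071))  # -> poor / good
```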

In some cases, the performance numbers on my company's sites were in the middle of the pack, but in a few cases, they were at the bottom. Once I showed the results to my CEO, she asked me to knock out some low-hanging fruit.

Be bold

This was the most difficult part for me. I'm outgoing, but as a developer, I didn't have much visibility in the company. To get the project off the ground, I had a 1-on-1 call with the CEO, developed tooling with zero budget, and got information from people who previously knew my name only as a task assignee in Jira.

This was difficult, and keeping it going is difficult too.

It's almost like running a consultancy within a business. I have to tout my accomplishments, advocate for things that I know will result in an awesome product for our users, and sell my "product."

Celebrate wins

My teammates are probably tired of me talking about performance on our dev calls, but to really elevate performance within our culture, I needed to make it more visible at an organizational level.

Every so often, I started sharing major improvements in the company Slack channel. These ranged from side-by-side before-and-after videos captured in WebPageTest to data from the excellent Treo Site Speed report, paired with a friendly explanation of what the graphs mean so people could understand what they were looking at. Sometimes my updates got lost in the chatter of other things, but other times, there was genuine interest in what I was doing.

In some instances, I scream it into the void on Twitter to my tens of dedicated followers.

As I was writing this, I decided to share a post (maybe my first ever) on LinkedIn. It was about something our product developer and I had been making noise about for years, and it finally came to fruition.

Let people know what's holding you back

A lot of performance work doesn't require UI changes, but some features are better served by being reworked from the ground up.

If you have a feature that creates a performance bottleneck, you might want to go into data-gathering mode to see how often it's used and what value it provides. In some cases, you might need to get designers, stakeholders, and project managers involved to reconceptualize a feature.

If you work at a company that has a bias toward new feature development over improving features that have already shipped, you might need to be relentless about bringing these issues up so they get attention.

Be realistic

Working on a small team, I didn't want to introduce too many new things at once to keep the technical debt in check. Any new tool I write seems to end up being mine to maintain forever, and admittedly, this influences my decisions since my time is finite. I'm generally ok with that, but it does make me weigh the cost/benefit ratio quite heavily.

I also wanted to be realistic about what I could achieve. For example, most of the sites I work on are related to hobby interests, and during 2020, traffic was at an all-time high because people needed something to keep them entertained. There's no chasing those traffic numbers now that the world has adapted to its current situation.

Instead, I opted to narrow the performance gap between my company's sites and their competitors, and to overtake a few where possible.

Limiting scope

It's tempting to fix a bunch of things all at once, but it's best to be single-minded. If you're working on a specific issue and notice an easy but unrelated one, resist the temptation to fix everything simultaneously.

Instead, make fixing the small issue a separate task and come back to it later. This gives you a cleaner change set when you commit your code and makes the change easier for QA to verify.

Validate everything

One of the quotes that circulates in the performance community is "you can't improve what you don't measure." It's also important to keep an eye on these measurements to make sure that your improvements are actually working as designed.

There was an instance recently where, while trying to improve the cache hit rate on our CDN, I actually made things worse by clearing the cache too aggressively whenever new content was generated.

Because I understood what to measure, I was able to catch the issue right away and did some additional development to clear cache more selectively. Had I not been monitoring the data to validate the fix, the regression would have gone unnoticed.
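
The fix itself was conceptually simple: only purge the pages whose HTML can actually change when something new is published. Here's a rough sketch of the idea; the purge_url() call is a stand-in for whatever your CDN's purge API looks like, and the URL patterns are assumptions about a typical publishing site, not my actual routes.

```python
from typing import Iterable


def urls_affected_by(article_path: str, section: str) -> Iterable[str]:
    # Only the pages whose rendered HTML can change when this article
    # is published, instead of flushing the entire cache.
    return [
        article_path,    # the article itself
        f"/{section}/",  # its section index
        "/",             # the home page, if it lists the latest posts
    ]


def purge_url(path: str) -> None:
    # Stand-in for the CDN purge call (usually an HTTP request to the
    # provider's API); intentionally left as a stub here.
    print(f"purging {path}")


def on_publish(article_path: str, section: str) -> None:
    for path in urls_affected_by(article_path, section):
        purge_url(path)


on_publish("/reviews/example-article/", "reviews")
```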

Alerts are also helpful, but there's a concept in nursing called alarm fatigue, and it applies here as well. Too many email alerts about application health lead to missing the important signals, so I tend to set alerts on a few critical areas to keep the chatter to a minimum.
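
As an example of what "a few critical areas" can look like, this sketch reuses the hypothetical runs table from the earlier Lighthouse script and only fires when LCP has been over budget on two consecutive runs, so a single noisy test doesn't trigger an email. The budget value here is just an example.

```python
import sqlite3

LCP_BUDGET_MS = 3000  # an example budget, not a universal rule


def notify(message: str) -> None:
    print(message)  # stand-in for email, Slack, etc.


def check_lcp(db: sqlite3.Connection, url: str) -> None:
    # Look at the two most recent runs for this URL and only alert if
    # both were over budget, which cuts down on one-off noise.
    rows = db.execute(
        "SELECT lcp_ms FROM runs WHERE url = ? "
        "ORDER BY tested_at DESC LIMIT 2",
        (url,),
    ).fetchall()
    if len(rows) == 2 and all(lcp > LCP_BUDGET_MS for (lcp,) in rows):
        notify(f"LCP has been over budget on the last two runs for {url}")
```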

Consume information voraciously

I have worked as a developer since 2005. Needless to say, the web is completely different today, and continuing education has been essential in order to stay relevant.

I spend a lot of time talking to people on Twitter, reading about new browser APIs that replace things that previously required dependencies, learning about more efficient ways to write CSS, and watching conference talks or streams.

I'm constantly picking up new information, recognizing patterns quicker, and digging deeper into the tools I use. The more of this you can do, the more future-proof your career is.

I'm motivated to commit time to it because performance is interesting to me. There's a detective element to it, and you need to understand how a browser works at a deeper level than you would just to write a working app.

There is still a long list of things I want to do related to performance, but these behavioral elements are as much a part of performance as the technical work.

Results

If you're interested in the numbers, here are the performance improvements made to date for the main sites in my company's portfolio, tested on a Moto G4 over a fast 3G connection:

Site 1

        TTFB    FCP     LCP     CLS
Before  0.698s  2.464s  8.434s  0.071
After   0.694s  2.124s  2.479s  0.034

Site 2

        TTFB    FCP     LCP     CLS
Before  1.012s  3.664s  6.065s  0.016
After   0.708s  2.233s  2.783s  0.018

Site 3

        TTFB    FCP     LCP     CLS
Before  0.686s  3.130s  3.130s  0.171
After   0.732s  2.191s  2.313s  0.049

Site 4

        TTFB    FCP     LCP     CLS
Before  1.887s  4.127s  4.126s  0.029
After   0.744s  1.875s  2.229s  0.004

Site 5

        TTFB    FCP     LCP     CLS
Before  0.930s  4.215s  5.333s  0.192
After   0.754s  1.962s  2.906s  0.001