A while ago, I was on a walk with my wife (a professor of nursing) and talking through an issue I was having at work. As I was describing my process of tracking down web performance issues, she mentioned that it sounded like the nursing process she teaches her students.
The nursing process is a systematic framework that involves five essential steps: assessment, diagnosis, planning, implementation, and evaluation. While these principles are designed for healthcare, they can be effectively adapted to fields like web performance optimization.
In this context, the "patient" is the website or application, and the goal is to ensure it performs optimally to meet user needs. Formalizing the process like this also serves as a guide for spotting where the developers tasked with executing an optimization effort might need support.
Assessment
In nursing, assessment involves gathering comprehensive data about a patient's health. Similarly, the first step in web performance optimization is assessing the site's performance health. The data falls into two categories, lab data and field data, with notable differences between the two.
Lab data comes from tools like WebPageTest, Google Lighthouse, PageSpeed Insights, or GTmetrix, which execute tests under repeatable conditions: the same device, geography, and network profile on every run. The data collection window for a lab test is very short, often under a minute. Because the conditions are repeatable, lab testing offers an apples-to-apples view of performance, and since it doesn't require data from real users, it can be integrated into CI/CD, triggered by webhooks, or run against staging servers.
The other type of data is field data. Field data is collected from real users on the production site and paints an accurate picture of the performance a site or application's user base actually experiences. The data will contain a wide range of devices, connection speeds, locations, and conditions. It has many data points, and it requires a longer timeframe to collect enough data for it to be meaningful. Vendors in this space include Chrome UX Report (CrUX), Akamai mPulse, SpeedCurve RUM, Datadog, New Relic, RUMVision, and Dynatrace. You can also roll your own solution using web-vitals.js or Boomerang.
Choosing between field and lab data is not an either/or decision. The two are complementary, and you should use both if you're able.
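Field tools typically roll those many data points up into percentiles; CrUX, for instance, reports the 75th percentile of each metric. As a rough sketch (the function and the sample values are my own, not from any particular vendor), aggregating raw field samples looks like this:

```javascript
// Summarize raw field samples the way RUM tools typically do:
// sort the values and read off a percentile (CrUX uses p75).
function percentile(samples, p) {
  if (samples.length === 0) return undefined;
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: index of the p-th percentile value.
  const index = Math.max(
    0,
    Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)
  );
  return sorted[index];
}

// Hypothetical LCP samples (in milliseconds) collected from real users.
const lcpSamples = [1200, 1800, 2400, 950, 3100, 2750, 1600, 2200];
const p75 = percentile(lcpSamples, 75);
```

A single lab run gives you one number per metric; the field gives you a distribution, which is why percentiles rather than averages are the standard way to summarize it.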
Metrics
You can't improve what you don't measure, but the metrics you collect are up to you. I have written about Google's Core Web Vitals in numerous other articles, and those are a great place to start.
If you have a Single Page Application, Google is currently refining their metrics so they work with the soft navigations that are implemented with client-side routing, but in the meantime, Measuring Performance of Single Page Apps by Erik Witt has been a source that I have referred to often.
You can also collect your own custom metrics and send them to the endpoint of your choosing. Several widely available browser APIs make this possible, including the User Timing API (performance.mark and performance.measure), PerformanceObserver, and the Server-Timing response header.
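One widely available option is the User Timing API. A minimal sketch, with arbitrary mark names and a hypothetical reporting step:

```javascript
// Time a custom operation with the User Timing API.
performance.mark('search-start');
// ... the work you want to measure happens here ...
performance.mark('search-end');

// measure() creates an entry whose duration is the gap between the marks.
const entry = performance.measure('search', 'search-start', 'search-end');

// In a real app you would beacon entry.duration to your collection
// endpoint, e.g. with navigator.sendBeacon() or fetch() with keepalive.
console.log(`search took ${entry.duration} ms`);
```

Because the marks and measures land in the performance timeline, RUM tools that read the timeline can often pick up these custom entries automatically.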
Diagnosis
After the assessment phase, a nurse develops a diagnosis based on the identified health issues. Similarly, in web performance optimization, the diagnosis phase involves interpreting the data collected during the assessment phase to identify performance bottlenecks.
There are many reasons why a site or application might be slow, and the diagnostic work sets out to identify the root cause of performance issues so there is a clear target. The tools of the trade here are lots of charts and graphs, some of which require technical expertise to interpret. Some tools, like Lighthouse, try to abstract away the technical part of the diagnosis and move straight into recommendations.
For metrics like Largest Contentful Paint (LCP) and Interaction to Next Paint (INP), remember that these can be divided further to help diagnose the root cause. LCP can be subdivided into Time to First Byte (TTFB), resource load delay, resource load duration, and element render delay, while INP can be subdivided into input delay, processing duration, and presentation delay.
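Working out the sub-parts is simple arithmetic over the timing data. A sketch with made-up timestamps (the helper function is mine, not a standard API):

```javascript
// Break an LCP value into its standard sub-parts (all values in ms).
// The input timestamps here are hypothetical, not from a real trace.
function lcpSubparts({ ttfb, resourceLoadStart, resourceLoadEnd, lcpTime }) {
  return {
    ttfb,                                              // time to first byte
    loadDelay: resourceLoadStart - ttfb,               // gap before the LCP resource starts loading
    loadDuration: resourceLoadEnd - resourceLoadStart, // time spent fetching the resource
    renderDelay: lcpTime - resourceLoadEnd,            // gap between fetch complete and paint
  };
}

const parts = lcpSubparts({
  ttfb: 300,
  resourceLoadStart: 500,
  resourceLoadEnd: 1400,
  lcpTime: 1900,
});
// The four sub-parts always sum back to the LCP value itself.
```

Seeing which sub-part dominates points you at the right fix: a large load delay suggests discovery problems (e.g. the image isn't in the initial HTML), while a large render delay suggests blocking work on the main thread.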
Planning
Once a diagnosis is made, nurses develop a care plan with measurable outcomes. In web performance optimization, this step involves planning a strategy for addressing the identified bottlenecks and setting specific performance goals. The diagnosis portion might reveal multiple issues, and if so, prioritization is necessary to address the most important ones first.
The plan should prioritize actions based on their impact and feasibility. For example, if a metric is far outside the desired range for a large number of visitors and the remediation steps are relatively low-effort, that's a prime candidate to start with. If the issue affects many page templates or app views, such as problems with globally included code, that also presents a good opportunity for improvement.
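That prioritization can be made explicit with a simple impact-versus-effort score. The scoring formula and the issue list below are illustrative assumptions, not a standard methodology:

```javascript
// Rank candidate fixes by impact relative to effort.
// usersAffected is a percentage; effort is a relative 1-5 estimate.
function prioritize(issues) {
  return [...issues].sort(
    (a, b) => b.usersAffected / b.effort - a.usersAffected / a.effort
  );
}

const issues = [
  { name: 'Unoptimized hero image', usersAffected: 90, effort: 1 },
  { name: 'Blocking third-party script', usersAffected: 70, effort: 2 },
  { name: 'Slow API endpoint', usersAffected: 40, effort: 5 },
];

const ranked = prioritize(issues);
```

Even a crude score like this forces the conversation about which fix delivers the most benefit for the least work, which is the heart of the planning step.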
If you work somewhere that doesn't have an established performance culture and you want to instill one, planning becomes more challenging. Take it from someone who has been there: a quick win is very beneficial for making allies with people who have influence. This can help overcome organizational challenges where there are competing priorities. The good news is that these types of organizations usually have plenty of candidates for quick performance wins.
The planning phase defines both what will be treated and how that treatment will be administered.
Implementation
In the implementation phase, nurses administer treatment or interventions as outlined in the care plan. In web performance optimization, implementation involves executing the planned optimizations. The scope of corrective measures depends entirely on the plan created in the previous step, and there are countless reasons for and methods of implementing corrections.
At this point in the process, the developer is in the thick of the action. Each intervention should be systematically implemented, tested, and monitored to ensure the site is moving toward the planned performance goals.
Implementation might also involve preventative measures and safeguards like performance budgets.
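A performance budget can be as simple as a table of metric thresholds checked on every build. A minimal sketch, with budgets loosely based on the Core Web Vitals "good" thresholds and made-up measured values:

```javascript
// A minimal performance-budget check of the kind you might run in CI.
// Budget limits follow the Core Web Vitals "good" thresholds;
// the measured values are illustrative.
const budgets = { lcp: 2500, inp: 200, cls: 0.1 };

function checkBudgets(measured, budgets) {
  const violations = [];
  for (const [metric, limit] of Object.entries(budgets)) {
    if (measured[metric] > limit) {
      violations.push({ metric, measured: measured[metric], limit });
    }
  }
  return violations; // an empty array means the build passes
}

const violations = checkBudgets({ lcp: 2300, inp: 260, cls: 0.05 }, budgets);
```

Failing the build when a budget is exceeded keeps regressions from quietly undoing the fixes you just implemented.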
As fixes are implemented, it's important to check your work early and often. If your plan involves six corrective measures, validate each fix after it is applied, rather than waiting until the end to validate everything at once. This is where lab testing is incredibly valuable due to its short feedback loop.
Evaluation
Evaluation is the final step in the nursing process, where a nurse assesses whether the interventions achieved the desired patient outcomes. In web performance optimization, this involves monitoring the impact of optimizations on key performance metrics. The evaluation phase hopes to answer questions like:
- Did the metrics improve as expected?
- Are the business metrics that are affected by performance moving in the direction we want?
- Are there any new issues that have emerged as a result of the changes?
Synthetic tools that integrate with CI or operate on a schedule can help continuously monitor performance. If the desired results aren’t achieved, the process loops back to the assessment phase to reanalyze and diagnose any lingering or new issues. This iterative process of optimization mirrors the continuous care cycle in nursing.
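A sketch of that evaluation step, comparing post-fix metrics against a pre-fix baseline (the metric names, numbers, and tolerance are illustrative):

```javascript
// Compare post-fix metrics against a pre-fix baseline to answer
// "did the metrics improve, and did anything regress?"
// A 5% tolerance band treats small fluctuations as noise.
function evaluate(baseline, current, tolerance = 0.05) {
  const report = {};
  for (const metric of Object.keys(baseline)) {
    const change = (current[metric] - baseline[metric]) / baseline[metric];
    report[metric] =
      change <= -tolerance ? 'improved' :
      change >= tolerance ? 'regressed' :
      'unchanged';
  }
  return report;
}

// Hypothetical before/after values: LCP and CLS changed, INP held steady.
const report = evaluate(
  { lcp: 3200, inp: 250, cls: 0.12 },
  { lcp: 2400, inp: 240, cls: 0.18 }
);
```

A report like this makes the loop back to assessment concrete: anything marked "regressed" becomes a new finding to diagnose in the next cycle.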
Conclusion
Applying the nursing process to web performance optimization provides a structured, systematic approach to solving performance issues. Just as nurses rely on careful assessment, diagnosis, planning, implementation, and evaluation to care for patients, developers can use the same framework to improve the critical performance metrics that correlate to user experience.