Executive Introduction to Synthetic and Real User Monitoring
In this post we are going to provide an executive introduction to Synthetic and Real User Monitoring.
So we all want to deliver great customer service, right? Of course we do, and I think we would all agree that a fast web experience plays a huge role in providing a great service. Add the additional benefit of improved search engine rankings and it is fair to say that:
“There is a need for speed”
I appreciate that statement won’t surprise anyone; after all, who wants to use a slow service? But maintaining a fast site is not as easy as you might think, especially given the growing complexity of a modern digital platform.
Clearly, in order to maintain a fast site you need to understand whether it is performing in line with your customers’ needs. Luckily, there are many approaches and techniques available to measure and monitor site performance, but two clear favourites stand out, namely Synthetic and Real User Monitoring.
So, what is the best approach?
Is it using a controlled, computerised synthetic script, or should you adopt real user monitoring, where data is collected from actual users of your website?
There is no easy answer so I thought I would put forward a summary case for each.
Real User Monitoring
Real User Monitoring, or RUM, records activity directly from your customers’ browsers. It provides an insight into the actual download speed being experienced by real people accessing your site. This naturally paints a very real picture and arms web operations with valuable information so that they can determine whether the site speed is acceptable or not.
With trend analysis, web ops can then deduce whether the site is slower or faster than normal and, if necessary, take appropriate action. It also provides an opportunity to capture additional user data such as location, technology type and so on.
RUM therefore allows the web team to identify the worst performing pages and provides an opportunity to compare results based on region and technology.
- How does the download speed vary country to country?
- Does a particular technology stand out as the best or worst performer?
All useful stuff. It is also effortless to manage: once the snippet of JavaScript has been inserted into your web pages, a provider like RapidSpike will handle the rest.
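To make that concrete, here is a minimal sketch of the kind of thing a RUM snippet does behind the scenes: collect timing data from the browser’s Performance API and beacon it to a collector. The endpoint and payload shape below are my own illustrative assumptions, not RapidSpike’s actual snippet.

```typescript
// Minimal sketch of what a RUM snippet does behind the scenes.
// The collector endpoint and payload shape are hypothetical placeholders.
window.addEventListener("load", () => {
  // Wait a tick so loadEventEnd has been recorded before we read it.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    const payload = {
      page: location.pathname,
      // Total time from the start of navigation to the load event, in ms.
      loadTime: nav.loadEventEnd - nav.startTime,
      // Time to first byte gives a rough view of server and network latency.
      ttfb: nav.responseStart - nav.requestStart,
      userAgent: navigator.userAgent,
    };

    // sendBeacon posts the data without slowing the page down for the user.
    navigator.sendBeacon("https://rum.example.com/collect", JSON.stringify(payload));
  }, 0);
});
```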
This granular view of performance from the “coalface” is useful and insightful but, like all things, it is not bulletproof.
Unfortunately, RUM has one obvious downside!
Yes, you have guessed it: it doesn’t work when there are no users on your site! So if a problem occurred during the night that slowed your site down, there would be no way to record this change in behaviour. Your uptime monitor would be unaware as well, because technically the site is up. It might be slow, but it is up, and that is good enough for the uptime monitor.
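To illustrate the point, here is a deliberately naive uptime check. It only asks whether the site responded, not how long the response took; the URL is a placeholder.

```typescript
// A deliberately naive uptime check: it asks whether the site responded,
// not how long the response took. The URL is a placeholder.
async function uptimeCheck(url: string): Promise<void> {
  const start = Date.now();
  const res = await fetch(url, { method: "HEAD" });
  const elapsed = Date.now() - start;

  // A 100 ms response and a 10 second response both count as "up" here.
  console.log(`${url} is ${res.ok ? "UP" : "DOWN"} (responded in ${elapsed} ms)`);
}

uptimeCheck("https://www.example.com").catch(console.error);
```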
Enter the Synthetic Monitor!
or, as we call it, Synthetic User Journey Monitoring
Before I go on, let me remind you what a synthetic user journey is. In simple terms, a synthetic monitor is a computerised script designed to interact with an application to mimic a critical process. Monitoring the “Account Login” process would be a great use case for applying a synthetic monitor; a call to an API is another very good example.
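As a rough sketch of what such a script can look like, here is a hypothetical “Account Login” journey written with Playwright. The URL, selectors and the MONITOR_PASSWORD environment variable are placeholder assumptions, and a production monitor would raise an alert on failure rather than simply log it.

```typescript
import { chromium } from "playwright";

// A hypothetical "Account Login" journey. The URL, selectors and the
// MONITOR_PASSWORD environment variable are placeholder assumptions.
async function loginJourney(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const start = Date.now();

  try {
    await page.goto("https://www.example.com/login");
    await page.fill("#email", "monitor@example.com");
    await page.fill("#password", process.env.MONITOR_PASSWORD ?? "");
    await page.click("button[type=submit]");

    // The journey only passes if the account page actually renders.
    await page.waitForSelector("#account-dashboard", { timeout: 10_000 });
    console.log(`Login journey completed in ${Date.now() - start} ms`);
  } catch (err) {
    // A real monitor would raise an alert here rather than just log.
    console.error("Login journey failed:", err);
  } finally {
    await browser.close();
  }
}

loginJourney();
```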
The most obvious benefit of a synthetic monitor is that it does not require human intervention and can therefore continue working around the clock, 24/7. Even when your site is quiet, the user journey will continue to do its job, providing an elegantly simple way to measure and analyse your most critical processes.
The second benefit is that the synthetic user journey monitor can record an HTTP Archive, or HAR, file. This provides an opportunity to record each page element that underpins the process, allowing for greater depth of analysis so that iterative performance improvements can be made at an element level. For example, slow or broken page elements can be identified automatically and flagged to the support team.
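A HAR file is just JSON, so this kind of element-level triage is straightforward to sketch. The file name and the one-second threshold below are illustrative assumptions.

```typescript
import { readFileSync } from "fs";

// Flag slow or broken elements in a recorded HAR file.
// The file name and the one-second threshold are illustrative assumptions.
interface HarEntry {
  time: number;                 // total time for the request, in ms
  request: { url: string };
  response: { status: number };
}

const har = JSON.parse(readFileSync("journey.har", "utf8"));
const entries: HarEntry[] = har.log.entries;

// Anything broken (4xx/5xx) or slower than one second gets flagged.
const problems = entries.filter(
  (e) => e.response.status >= 400 || e.time > 1000
);

for (const e of problems) {
  console.log(`${e.response.status}  ${Math.round(e.time)} ms  ${e.request.url}`);
}
```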
Another role of the synthetic user journey monitor is to test that the application process completes and that the speed of the process is acceptable. The real user monitor, on the other hand, is only really concerned with individual page speed and doesn’t have the ability to join the dots, as it were, in the way the user journey does.
As you might be able to tell, I am slightly biased towards the synthetic approach, as I feel the information provided is broader and more useful, but I absolutely recognise the role of RUM and strongly feel there is a compelling case to use both as part of a tiered performance strategy.
For example:
- Tier 1: Real User Monitoring – what is our page speed in the real world, does it vary from country to country, and how does speed compare browser to browser? What is the trend – are we faster or slower than we were? Are we happy with our speed, country to country and browser to browser?
- Tier 2: Synthetic User Journey Monitoring – what is our performance baseline, are our critical processes working, are there any errors, and are we performant? Again, what is the trend? Do we need to act or not?
- Tier 3: Uptime Monitoring – is the site up or down? The last line of defence.
This approach provides a way to measure the speed of each page, a mechanism for measuring the speed of a process and, finally, a way to detect an outage.
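If it helps to see the strategy written down, here is one illustrative way the three tiers might be codified as monitoring configuration. The field names, schedules and alert conditions are my own assumptions, not any particular vendor’s format.

```typescript
// One illustrative way to write the three tiers down as configuration.
// Field names, schedules and alert conditions are assumptions only.
interface MonitorTier {
  name: string;
  purpose: string;
  schedule: string;
  alertWhen: string;
}

const strategy: MonitorTier[] = [
  {
    name: "Real User Monitoring",
    purpose: "Page speed as experienced by real visitors, by country and browser",
    schedule: "continuous, while users are on the site",
    alertWhen: "page speed trends noticeably slower than the norm",
  },
  {
    name: "Synthetic User Journey Monitoring",
    purpose: "Critical processes (e.g. Account Login, API calls) complete and stay fast",
    schedule: "around the clock, e.g. every 5 minutes",
    alertWhen: "a journey fails or drifts from its performance baseline",
  },
  {
    name: "Uptime Monitoring",
    purpose: "Last line of defence: is the site up or down?",
    schedule: "around the clock, e.g. every minute",
    alertWhen: "the site does not respond",
  },
];

console.table(strategy);
```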
Often, sites start to slow down before they crash, which further strengthens the case for measuring the performance of your site. Changes in speed would be detected either by the RUM monitor or by the Synthetic User Journey, providing valuable time to react and head off a likely outage.
In summary, as with all IT strategies, there is no single answer; a layered, multi-vendor approach is always going to provide the most robust solution. By adopting a layered approach you will be able to address the pros and cons of each monitoring style and help ensure that your most critical assets remain highly available and performant.