What is Digital Experience Monitoring?
The architecture of application infrastructure is evolving. As with any disruption, IT managers and engineers must weigh the tradeoffs that come with it.
As infrastructure hardware is abstracted, retaining the same level of observability available in traditional data centers becomes difficult. The tradeoff between complexity and observability is most pronounced in public cloud computing.
Public cloud infrastructure drastically reduces the burden of managing your own hardware. The downside, however, is reduced control over the systems you use and, therefore, lower visibility into how they are behaving.
Fortunately, modern system observability techniques such as Digital Experience Monitoring (DEM) mitigate this tradeoff. As defined in Gartner’s Market Guide for DEM,
“Digital experience monitoring (DEM) is a performance analysis discipline that supports the optimization of the operational experience and behavior of a digital agent, human or machine, with the application and service portfolio of enterprises. These users, human or digital, can be a mix of external users outside the firewall and inside it. This discipline also seeks to observe and model the behavior of users as a flow of interactions in the form of a customer journey.”
Another way to think about DEM is that the actions of the users are observable via the frontend user interfaces and the backend APIs, as the transactions they initiate flow through the systems. More importantly, what’s measured is the quality of their experience. By understanding the friction points in the overall user journey, business owners can deliver a better customer experience and drive more profitable business outcomes.
Types of Digital Experience Monitoring
There are two main types of technology that can support DEM: synthetic monitoring and real user monitoring (RUM).
Both of these monitoring techniques are designed to work with applications hosted in a public cloud with complex transaction journeys. It’s important to incorporate multiple DEM strategies to ensure applications are running as intended and are optimized for user experience.
Synthetic monitoring
Synthetic monitoring, or active monitoring, uses technology to emulate the journeys users might take when interacting with an application. Scripts are deployed to generate many permutations of user paths that cover a broad range of user-facing scenarios.
These paths vary across details such as browser type, user locale, journey type (e.g., completing a form or a purchase pipeline), network latency, and many other variables. This matters because it is hard to cover every user scenario in development, even with extensive end-to-end and integration testing.
Synthetic monitoring solutions run automatically and give insight into application performance across different user scenarios. This is especially useful for catching errors in business transactions before deployment. For example, within a checkout process, synthetic monitoring can detect that a payment method fails because of a bad API call or an invalid database transaction, since it generates scenarios in which these different paths are exercised.
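To make this concrete, here is a minimal sketch of a scripted synthetic check using Playwright (one of several browser-automation tools that can fill this role). The storefront URL, the CSS selectors, and the checkout flow are hypothetical placeholders, not any real product’s interface:

```typescript
import { chromium, devices } from "playwright";

// Vary the emulated browser profile and locale across runs, as a real
// synthetic monitor would do to cover more user scenarios.
const profiles = [
  { ...devices["Desktop Chrome"], locale: "en-US" },
  { ...devices["iPhone 13"], locale: "de-DE" },
];

async function checkoutJourney(profile: (typeof profiles)[number]) {
  const browser = await chromium.launch();
  const context = await browser.newContext(profile);
  const page = await context.newPage();
  const start = Date.now();
  try {
    // Hypothetical storefront and selectors -- substitute your own.
    await page.goto("https://shop.example.com/product/42");
    await page.click("#add-to-cart");
    await page.click("#checkout");
    await page.fill("#card-number", "4242424242424242"); // test card
    await page.click("#pay");
    // The check fails if the confirmation never appears in time.
    await page.waitForSelector("#order-confirmed", { timeout: 10_000 });
    console.log(`OK (${profile.locale}) in ${Date.now() - start} ms`);
  } catch (err) {
    // A real monitor would alert an operator or open an incident here.
    console.error(`Checkout failed for ${profile.locale}:`, err);
  } finally {
    await browser.close();
  }
}

(async () => {
  for (const profile of profiles) await checkoutJourney(profile);
})();
```

A production-grade synthetic monitor would run checks like this on a schedule from multiple geographic locations and record the timing of each step, not just pass or fail.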
Real User Monitoring
Often contrasted with synthetic monitoring, real user monitoring (RUM) is another DEM solution. Instead of emulating user interactions, RUM tracks actual user activity across an application.
RUM solutions often include a dashboard that visualizes, in real time, the number of user sessions an application is supporting, each user’s path during a session, demographic data, user metadata (browser type, location, etc.), and more.
Powerful RUM solutions additionally provide various ways to analyze the data. For example, when there is a critical issue (like a server outage or unavailable resources), RUM solutions allow administrators to pinpoint the affected transactions, trace them back to the origin of the issue, and even identify the affected users. RUM thus facilitates root cause analysis (RCA) by providing real-time visibility into actual user sessions.
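Under the hood, RUM agents instrument the browser and report what real sessions experienced. The following sketch shows the general idea, assuming a hypothetical collector endpoint; commercial RUM products ship ready-made agents that capture far more (full session paths, errors, resource timings):

```typescript
// Report page-load milestones for the current navigation to a RUM collector.
function reportNavigationTiming(collectorUrl: string): void {
  const [nav] = performance.getEntriesByType(
    "navigation",
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  const payload = {
    page: location.pathname,
    // User metadata of the kind RUM dashboards slice by.
    userAgent: navigator.userAgent,
    language: navigator.language,
    // Key latency milestones, in milliseconds since navigation start.
    ttfb: nav.responseStart - nav.requestStart,
    domContentLoaded: nav.domContentLoadedEventEnd,
    loadComplete: nav.loadEventEnd,
  };

  // sendBeacon queues the data even if the user navigates away.
  navigator.sendBeacon(collectorUrl, JSON.stringify(payload));
}

// Fire just after the load event so loadEventEnd is populated.
window.addEventListener("load", () => {
  setTimeout(
    () => reportNavigationTiming("https://rum.example.com/collect"),
    0,
  );
});
```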
Why DEM?
Compared to traditional methods like Application Performance Monitoring (APM), DEM offers more when it comes to optimizing application performance.
DEM focuses on a more generalized area that includes IoT devices, software applications, networks, and infrastructure, emphasizing performance from the user’s point of view. APM, on the other hand, covers application performance from the software code’s perspective: how many times the code was executed, how long it took to complete, and whether it ended in an error.
Although the two are different, they complement each other by providing a “big picture” view of application performance. DEM additionally contributes insight into the user’s journey and overall experience.
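For a sense of the code-level signals APM works with, here is a minimal sketch of the kind of instrumentation an APM agent applies automatically. The wrapper and metric names are illustrative assumptions, not any particular vendor’s API:

```typescript
// Per-function statistics: call count, error count, total duration.
type Stats = { calls: number; errors: number; totalMs: number };
const metrics = new Map<string, Stats>();

// Wrap a function so every call records the three APM signals named above.
function instrument<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R,
): (...args: A) => R {
  return (...args: A): R => {
    const stats = metrics.get(name) ?? { calls: 0, errors: 0, totalMs: 0 };
    metrics.set(name, stats);
    const start = performance.now();
    stats.calls++; // how many times the code was executed
    try {
      return fn(...args);
    } catch (err) {
      stats.errors++; // whether it ended in an error
      throw err;
    } finally {
      stats.totalMs += performance.now() - start; // how long it took
    }
  };
}

// Usage: wrap a hypothetical handler, call it, then inspect its stats.
const handleCheckout = instrument("handleCheckout", () => {
  /* ...business logic... */
});
handleCheckout();
console.log(metrics.get("handleCheckout"));
```

DEM complements these code-centric numbers with the user-centric view: not just how long the function took, but how that latency felt along the user’s journey.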
Components you don’t control directly, such as DNS (Domain Name System) and CDN (content delivery network) services, can affect the end-to-end digital experience. Fortunately, these, too, are observable with DEM. This functionality is valuable because an application distributed across the Internet is complex by nature, and a performance bottleneck can occur anywhere along its transaction path.
DEM improves system resiliency, prevents issues early through emulated testing, and ensures a great user experience for internal and external application interfaces.
Use cases and best practices
It’s important to know when to use synthetic monitoring and when to use RUM. We have organized typical use cases in the table below.

| Use case | Synthetic monitoring | Real user monitoring |
| --- | --- | --- |
| Catching broken business transactions before deployment | ✓ | |
| Covering many browser, locale, and journey permutations | ✓ | |
| Monitoring when user traffic is low or non-existent | ✓ | |
| Observing actual user sessions in real time | | ✓ |
| Root cause analysis of issues affecting live users | | ✓ |
| Analyzing user demographics and metadata | | ✓ |
Some common pitfalls to avoid are:
- Don’t rely on DEM alone - Although DEM solutions are needed, IT operations teams should use them in conjunction with other tools. Monitoring tools used for transaction tracing, infrastructure monitoring, database monitoring, log analysis, and application performance monitoring all complement DEM. Diversifying your overall monitoring strategy with these tools facilitates root cause analysis.
- Plan for application changes - Synthetic monitoring is not resilient to user interface changes: a small change could break the testing harness and cause false alerts. The best practice is to include a step in the continuous integration/continuous delivery (CI/CD) process to validate that the changes won’t break the monitoring that’s in place, as illustrated in the sketch after this list.
- Create a proposal outlining your monitoring needs - DEM solutions like synthetic monitoring require planning and discussions with engineers. Don’t implement a solution without taking the time to document your organization’s requirements. Check out this list of must-have synthetic monitoring features.
- Avoid adopting too many tools - Too many tools add unnecessary complexity across your environment and confuse operators. Adopting a new tool requires integrations with existing tools, operating procedures, and training of the operators. The right balance is to select only one tool for each area of functionality (such as infrastructure, database, logs, DEM, and APM). Avoid duplication of monitoring for a given component.
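As a sketch of the CI/CD validation step mentioned above, the gate below checks that the UI elements a synthetic monitor depends on still exist in a new build before it ships. The staging URL, page paths, and selectors are hypothetical, and Playwright is assumed as the automation tool:

```typescript
import { chromium } from "playwright";

// Pages and the selectors the synthetic monitor expects on each one
// (hypothetical names -- substitute those from your own journeys).
const expectations: Record<string, string[]> = {
  "/product/42": ["#add-to-cart"],
  "/cart": ["#checkout", "#pay"],
};

async function validateMonitorSelectors(stagingBaseUrl: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const missing: string[] = [];
  for (const [path, selectors] of Object.entries(expectations)) {
    await page.goto(stagingBaseUrl + path);
    for (const selector of selectors) {
      // count() is 0 when the new build no longer renders the element.
      if ((await page.locator(selector).count()) === 0) {
        missing.push(`${path} ${selector}`);
      }
    }
  }
  await browser.close();
  if (missing.length > 0) {
    console.error("UI changes would break synthetic monitoring:", missing);
    process.exit(1); // fail the CI job before the change ships
  }
  console.log("All monitored selectors are still present.");
}

validateMonitorSelectors("https://staging.shop.example.com");
```

Running a gate like this against a staging deployment on every pull request turns “the monitor silently broke” into a failed build.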
Conclusion
Traditional monitoring tools cover the application code, the infrastructure, the logs, the networks, and the databases. Digital experience monitoring (DEM) complements these tools by measuring the quality of the user experience through the transactions that traverse the hardware and software components that make up the infrastructure.
DEM extends beyond these to cover critical components not directly controlled by the application operators, such as DNS, CDNs, and the networks operated by Internet service providers. DEM includes RUM and synthetic monitoring not only to observe actual user sessions but also to emulate them when traffic is non-existent or insufficient. This allows operators to detect problems before users become aware of them.