
My Most Surprising Discoveries from The SRE Report 2023

Published November 15, 2022

I’ve had the honor and privilege of authoring The SRE Report for the last three years. For the 2023 version, this included working with some amazing individuals like Anna Jones, Kurt Andersen, and Steve McGhee.

Download The SRE Report 2023 here (no registration required).

When we release the report, we are always asked, “What was most surprising to you?” And while I’ve had to think about it in previous years, the most surprising finding from the 2023 report was instantly apparent.

The biggest surprise from The SRE Report 2023 was the fierceness with which our initial beta testers automatically defended their preexisting biases and views.

For the first time in the report’s history, we dug deeply into how different roles and ranks (e.g., individual practitioners versus C-suite executives) answered specific opportunity or challenge questions. We were not surprised that there were gaps, or deltas, between some of the answer sets. But we were surprised by the path this initial finding took us down.

“Please rate the value received from Artificial Intelligence for IT Operations (AIOps)”

To illustrate, consider one of the report’s researched topics: AIOps. We asked directly, “Please rate the value received from Artificial Intelligence for IT Operations (AIOps).” The aggregate answer alone evoked some emotion.

[Chart: aggregate responses to “Please rate the value received from Artificial Intelligence for IT Operations (AIOps)”]

The survey had over 550 responses; the aggregate answers are shown above. (Note: I was a little surprised by the aggregate answer, which, guiltily, is part of the bias problem we’ll get to in a moment.)

“Which most closely describes your role?”

The biggest surprise started to present itself when I broke down the data by one of the other survey questions: “Which most closely describes your role?”

[Chart: AIOps value responses broken down by “Which most closely describes your role?”]

Pause. Wait. What?

As you can see, there is a gap in the perceived value AIOps could potentially provide to an organization. What is more interesting, though, is that the general trend skews in completely different directions as the role categories shift from left to right (note: the “Unsure” answer is not part of the linear trend)!
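For readers who want to try this kind of breakdown on their own survey data, here is a minimal sketch of the cross-tabulation in Python with pandas (the file and column names are hypothetical; this is not the report’s actual analysis code):

```python
import pandas as pd

# Hypothetical survey export; the file and column names are illustrative.
df = pd.read_csv("sre_survey_2023.csv")  # columns: "role", "aiops_value"

# "Unsure" is not part of the linear trend, so set it aside first.
rated = df[df["aiops_value"] != "Unsure"]

# Cross-tabulate role against perceived AIOps value as row percentages,
# so each role's answer distribution can be compared side by side.
breakdown = pd.crosstab(
    rated["role"], rated["aiops_value"], normalize="index"
) * 100

print(breakdown.round(1))
```

A table like this makes the opposing skews easy to spot: read each row left to right and compare where the mass of each role’s answers sits.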

The biggest surprise of this year’s report…

This brings us to the biggest surprise of this year: a surprise resulting from a surprise. When we started to show this and other data to our first round of beta testers and feedback givers, the first thing every one of them did was automatically start to defend their own bias or perspective.

On the one hand, we heard comments like, “They don’t understand how to properly implement it.” On the other hand, we heard, “They don’t understand how difficult it is to work across the organization to get the correct data intake.”

Only when we began to ask questions like, “What should be done about this?” or, “How do different parts of the organization work to bridge this perception gap?” did we start to get past those initial reactions.

[Chart: how frequently SREs collaborate with different professional roles]

One of the beautiful things about writing a research paper like this is that different readers can look at the same data and draw their own conclusions, sometimes different from the ones we wrote. As we saw with our beta testers, there will be some automatic reactions. But we sincerely hope this data will stimulate and enable newer, better, or more agile conversations, especially with executives.

Have new, better, or more agile conversations across the org structure

What do those more agile conversations look like? They…

  • Remove bias
  • Ensure a safe environment
  • Bridge the gap by discussing capabilities

First, start by removing bias. Confirmation bias, for example, is the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories. Confirmation bias may be less pervasive when someone speaks from a pool of personal experience. Even then, wishful thinking or false optimism may still be unintentionally injected, since previous experiences may have had different underlying dimensions.

Second, ensure conversations are safe; ensure messengers are not punished for delivering contradictory or conflicting views. Be aware that any conversation across a power gradient can be fraught with risk for the “less powerful” person, who may be risking their career by giving bad news about a “pet project.” Consider an “agile conversation” approach, which is built around defusing psychological land mines by using conversations to foster high trust, lower fear (through understanding why), make commitments, and hold people accountable. Overall, when reevaluating communication and feedback loops, ensure a just culture of openness, sincerity, and transparency.

Last, discuss capabilities, because they are the gateway between “speeds and feeds” on one end of the spectrum and positive business outcomes on the other. An executive saying, “We need to show business value,” is nefarious nothingness and [probably] does not help a reliability practitioner know which speeds and feeds are important. In the other direction, geeking out over some amazing line of code [probably] won’t help an executive see how it will land new logos or retain existing customers. Instead, consider a conversation around, “How do we ensure user experiences are not disrupted when we change a component in our application or Internet stack?”

To find out some of the other surprising takeaways from the 2023 SRE Report, download it here (no registration required).

