Google needed to analyze vast volumes of data in as little time as possible to determine long-term benchmarks and trends, reduce latency, drill down into specific data over long periods of time, and make data easily accessible for the many people and departments to whom it may be relevant.
Google partnered with Catchpoint to:
- Provide active observability data across their digital properties and networks.
- Push their performance data to a designated endpoint for storage and analysis.
- Integrate Catchpoint alerts with their own alerting tool to become more proactive.
Catchpoint’s webhooks give us the control and flexibility to visualize and analyze our data, and integrate it with our alerting tools. With this tool, we were able to use Catchpoint's real-time measurements to pinpoint and resolve Google Public DNS latency. Instead of a long process, we were able to get at it almost instantly, and turn around the problem in just minutes instead of tens of minutes.
Problem
As one of the largest enterprise companies in the world, Google controls a massive number of digital properties, all of which require constant internal and external monitoring to maintain the technology brand’s reputation for digital excellence.
In order to ensure excellent performance across their many different digital properties, Google must be able to collect, store, and analyze huge amounts of data in as little time as possible. A traditional REST API solution cannot satisfy this need because of system limits that cap the number of requests that can be made in an allotted period of time. Instead, they need a way to collect and store all of the data as it comes in so that they can analyze it in real time.
Google must be able to analyze this information across months and years, be it for determining long-term benchmarks and trends, or for drilling down into specific data over long periods of time.
Additionally, due to the scope of the organization, the data must be stored in a place that is easily accessible to the many different people and departments to whom it may be relevant.
Solution
To manage all of this data, Google’s Site Reliability Engineering (SRE) team relies on Catchpoint’s Test Data Webhook feature. This tool lets the client select which of their tests will push Catchpoint data to a specified endpoint in real time, where it can then be integrated with any number of third-party tools for storage and visualization; in Google’s case, this is done using their own in-house tools such as Google Data Studio.
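To give a concrete sense of what such an endpoint can look like, below is a minimal sketch of a webhook receiver, assuming a Python Flask application deployed on App Engine; the route path, helper names, and payload handling are illustrative placeholders, not Google’s or Catchpoint’s actual implementation.

```python
# Minimal sketch of a receiver for Catchpoint's Test Data Webhook, assuming a
# Flask app on App Engine. The route path and process_test_result() helper are
# illustrative placeholders.
from flask import Flask, request

app = Flask(__name__)

@app.route("/catchpoint/test-data", methods=["POST"])
def receive_test_data():
    payload = request.get_json(force=True)  # Catchpoint pushes one JSON document per test run
    process_test_result(payload)            # hand off to the ETL stage (sketched below)
    return "", 204

def process_test_result(payload: dict) -> None:
    # Placeholder for the extract/transform/load steps described below.
    pass

if __name__ == "__main__":
    # Local development server; App Engine serves the app via its own runtime.
    app.run(host="127.0.0.1", port=8080)
```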
With the Test Data Webhook enabled, Google’s performance data is pushed to their designated endpoint every single time a test runs within the Catchpoint platform, where they can then execute their ETL (Extract, Transform, Load) process. In doing so, they are able to overcome the request limits of the REST API and handle all of their performance data as soon as it is collected by Catchpoint, as well as store it for even longer than Catchpoint’s industry-leading three-year retention period.
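As an illustration of the extract-and-transform portion of that process, the sketch below flattens an incoming payload into a simple record ready for time-series storage; the payload field names (TestId, NodeName, ResponseTime) are hypothetical, not Catchpoint’s documented schema.

```python
# Hypothetical "extract and transform" step: pull a handful of fields out of a
# webhook payload and normalize them into a flat record. The payload field
# names used here are placeholders, not Catchpoint's documented schema.
from datetime import datetime, timezone

def transform(payload: dict) -> dict:
    return {
        "test_id": str(payload.get("TestId", "")),
        "node": payload.get("NodeName", "unknown"),
        "response_ms": float(payload.get("ResponseTime", 0.0)),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
```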
After the test data is collected from the test target by the Catchpoint node, the information is compiled into JSON format (XML is another option) and sent to Google’s endpoint, where it posts to an App Engine application running on Google’s own platform. There it undergoes the ETL steps and is then stored in Cloud Bigtable, from which it can be visualized and analyzed using Data Studio or any other visualization tool they wish (e.g., Grafana, Geckoboard).
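The load step into Cloud Bigtable might look something like the sketch below, assuming an existing Bigtable instance, table, and a "metrics" column family; the project, instance, and table identifiers are placeholders.

```python
# Sketch of the "load" step into Cloud Bigtable using the google-cloud-bigtable
# client. The project, instance, table, and column-family names are placeholders.
from google.cloud import bigtable

def load_to_bigtable(record: dict) -> None:
    client = bigtable.Client(project="my-gcp-project")
    table = client.instance("perf-monitoring").table("catchpoint_results")

    # Keying rows by test id plus timestamp keeps a test's history contiguous,
    # which makes scans over months or years of data efficient.
    row_key = f"{record['test_id']}#{record['collected_at']}".encode("utf-8")
    row = table.direct_row(row_key)
    row.set_cell("metrics", b"node", record["node"].encode("utf-8"))
    row.set_cell("metrics", b"response_ms", str(record["response_ms"]).encode("utf-8"))
    row.commit()
```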
Results
The measurements that Catchpoint provides have enabled Google to detect performance issues in multiple digital properties under their control, including both their Public DNS and their backbone infrastructure.
In the case of Google Public DNS, the service was experiencing very high query latency that their own internal monitoring could not detect: because there is no TCP connection, there is no way of knowing how long it takes a DNS answer to reach the client once it is sent; essentially, there is no way for them to measure the round-trip time between the client and the server.
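To illustrate why an external vantage point closes that gap, the sketch below measures the full round-trip latency of a query from a client to Google Public DNS (8.8.8.8). It assumes the dnspython library, and the query name is just a placeholder.

```python
# Illustrative client-side probe: measure the full round-trip latency of a DNS
# query to Google Public DNS, the measurement the server side cannot make on
# its own. Requires the dnspython package; the query name is a placeholder.
import time

import dns.message
import dns.query

def probe_dns_latency(name: str = "example.com", resolver: str = "8.8.8.8") -> float:
    query = dns.message.make_query(name, "A")
    start = time.monotonic()
    dns.query.udp(query, resolver, timeout=2.0)   # blocks until the answer arrives over UDP
    return (time.monotonic() - start) * 1000.0    # round-trip time in milliseconds

if __name__ == "__main__":
    print(f"Query latency to 8.8.8.8: {probe_dns_latency():.1f} ms")
```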
With Catchpoint, however, Google’s SRE team was able to detect query latency issues from a network perspective, specifically by identifying the ASNs that were experiencing the most latency. From there, the SRE team could drill down directly to where the problem was, rather than having to go back and forth between the ISP that had reported the problem, its customers, and the SRE support team. Ultimately, they were able to detect and fix the problem in just a few minutes, when it ordinarily could have taken close to an hour.
Furthermore, whenever Google has problems on their backbone that require a post-mortem, one of the things they are interested in is learning how those failures have affected their Cloud product. Because anybody in the company can access the data once it reaches the data store, the appropriate people can perform the analysis themselves and include it in the post-mortem or performance report, without having to rely on someone with direct access to the Catchpoint platform to create a report for them, saving time for multiple teams.