Webinar

Planet of the APIs: A Master Class on Monitoring Transactions in the Wild

APIs are the crucial, hidden heroes of today's connected world, but poor performance or outright failure can degrade the user experience. Proper API monitoring and testing are essential.

Watch this technical session exploring how proactive API transaction monitoring can fulfill a variety of use cases, such as performance, regression, and functional monitoring. Learn best practices for monitoring APIs in real-world scenarios.

This webinar covers:

  • Intro to APIs, gateways, and architectures
  • Importance of API monitoring
  • How to build an API test script with validation, multi-step processes, and data extraction
  • Advanced tips for JSON parsing, macros, and smart variable usage
  • Integrating performance data with webhooks into your tools
Register Now
Video Transcript

Leo Vasiliou

00:18 - 01:37

Welcome to Master Class 4, our 4th and final master class of 2024: Planet of the APIs. Dun dun dun.

We will explore the technical guts of API multi-step transactions, monitoring, and some of the creative use cases for them. My name is Leo Vasiliou, former practitioner and current author of our annual site reliability engineering report, which, by the way, is coming out in January.

And I just want to take a moment to say thank you for giving us some of your precious time today. This one is sure to not disappoint.

I have the wonderful pleasure of being joined by 2 of the smartest engineers I know, Brandon and Nilabh. Brandon, if you don't mind kicking us off, you want to say hello?

Brandon DeLap

01:37 - 01:59

Thank you, Leo, and thank you, everyone, for hopping on this webinar with us today. My name is Brandon DeLap, Senior Solutions Engineer at Catchpoint.

I've been with Catchpoint for over 11 years now, and the overall topic of APIs is actually one of my favorites. So definitely looking forward to this conversation with everyone today.

Nilabh.

Nilabh Mishra

01:59 - 02:17

Hello, everyone. First of all, thank you so much for joining us today on one of my favorite topics as well, APIs.

I've been a part of Catchpoint for 9 years now and really looking forward to this webinar. Thank you so much.

Leo Vasiliou

02:17 - 09:08

Thank you, team, for the introductions. Before we continue, we'll just cover a couple of housekeeping items.

First, if I could ask people who are on with us today to take a moment and find the Q&A tab on your screen. If you have any questions along the way, please type them in there as we go.

Second, please take a moment to find the chat function and, just to warm up a little bit, say hello and tell us where you are joining from. Hello from Boston.

Alrighty. So here's a brief summary of today's talk tracks.

Alright. A basic intro, a talk about some capabilities, the importance and evolution of APIs, a deep dive into some of the use cases, and a close with some tips and tricks.

And as we mentioned, feel free to ask questions along the way. Without further ado, let's get into it.

Basic introduction, laying the foundation for this master class. Alright.

So we'll quickly start with what is an API. Now we put this slide here as a matter of diligence.

In other words, we hope you already have a basic understanding of APIs and appreciate their value. Right? API stands for application programming interface.

You send a request, an operation, to the implementation; the implementation does some magical thing and you get a response. APIs power our integrations, they power e-commerce, they power financial transactions, hell, they damn near power everything at this point.

But what we really want to say here is this: APIs are the hidden heroes of today's digital world, and that hidden, behind-the-scenes nature should not prevent them from getting the proper monitoring and observability attention they deserve. Before we continue, let's make a distinction between the types of monitoring and observability we're talking about here in this masterclass, right, lay the foundation.

Proactive monitoring is telemetry from emulating client-to-server interactions. Right? Program a node, a robot, a vantage point, whatever you want to call the client, to, for example, do a ping or an HTTP GET once a minute against a web service, website, API endpoint, whatever. This telemetry will occur per your configured schedule or ad hoc, even when users are not on your site, fulfilling use cases like validating releases in the middle of the night before your users wake up and start interacting.

This is versus passive monitoring. Right? The telemetry from already occurring interactions.

For example, your real user monitoring, or RUM as we might say. Now in the context of APIs, passive monitoring might be an installed agent which reports how many sessions are open to an endpoint.

Right? We've got a thousand open sessions, or our throughput is whatever megabits per second. Versus proactive monitoring, which might be: can all nodes around the world send a request to my critical APIs once a minute, assert that we get a 200 response, and then alert me if it doesn't. And having said that, what this master class is about is proactive monitoring to ensure the availability, performance, and functionality of your first-party, third-party, and multi-party APIs.
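
To make that proactive pattern concrete, here is a minimal sketch in plain Node.js, not Catchpoint's actual scripting API; the endpoint URL and the alert hook are placeholders invented for illustration:

```javascript
// Minimal sketch of a proactive API check: request an endpoint on a
// schedule, assert on the response, and alert on failure.
// The URL and alert mechanism are placeholders, not a real service.
const ENDPOINT = "https://api.example.com/health";

async function checkOnce() {
  const started = Date.now();
  try {
    const res = await fetch(ENDPOINT, { method: "GET" });
    const elapsedMs = Date.now() - started;
    if (res.status !== 200) {
      alertTeam(`Expected 200, got ${res.status}`);
    } else {
      console.log(`OK in ${elapsedMs} ms`);
    }
  } catch (err) {
    alertTeam(`Request failed: ${err.message}`);
  }
}

function alertTeam(message) {
  // Stand-in for a real alerting integration (webhook, pager, etc.).
  console.error(`ALERT: ${message}`);
}

// Run once a minute, mirroring the "configured schedule" idea.
setInterval(checkOnce, 60_000);
checkOnce();
```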

We're going to spend the majority of our time talking about the functions of API multi-step transactions. But first, we'll just give a quick nod to proactive monitoring as it pertains to availability and performance.

Right? Proactive monitoring for core uptime: run tens, hundreds, thousands, however many checks a minute depending on the criticality of the API, and alert me if there's a problem.

And then there's my favorite, performance and analytics. Alright.

How fast or slow is it? What's your p25, p50 (your median), p75, p99, whatever. But also, what are those values across different metrics and dimensions? Like, going back to that what-is-an-API slide, is the request time different based on where your users are, or is the response time different based on who the cloud provider is? Can you slice and dice those data? It is that last part, the different metrics and dimensions, that is so critical to keep in mind, because, as we say, today's API economy and ecosystem is Internet-centric and API-first.
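
For readers who want the arithmetic behind those percentile figures, here is a small self-contained sketch using the simple nearest-rank method; production systems often use interpolation or streaming estimators instead:

```javascript
// Nearest-rank percentile over a set of response-time samples (ms).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const responseTimes = [120, 95, 210, 130, 480, 101, 99, 350, 125, 88];
for (const p of [25, 50, 75, 99]) {
  console.log(`p${p}: ${percentile(responseTimes, p)} ms`);
}
```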

So proactive, Internet-centric API monitoring to ensure availability, performance, and functionality should be a part of your charter. And before handing off to Brandon, I would like to ask: please just take a quick peek at some of these different use cases, or sub-use cases if you will, for using proactive IPM for monitoring APIs, from smoke, uptime, and load checks to integration and regression checks. We'll discuss some of them along the way.

And if there are any others you want to hear more info on, then please, please, please submit the poll at the end. So, hopefully, it didn't take too long to set the stage and give the core foundation for the rest of this master class.

Thank you for listening. And, Brandon, I guess I'll turn it over to you.

Brandon DeLap

09:08 - 13:26

Excellent. Thank you very much, Leo.

Everyone, in the next section here, what I'm going to be walking through with Nilabh is the importance and the evolution of APIs. So let's go ahead and go back to the sixties and seventies, where we started with basic API requests to web services or web servers that were hosted locally.

Then came the introduction of object-oriented programming, and with it, a new approach to overall software design and APIs, and flexibility from the development perspective.

Then came XML. Now this was a huge step.

It was one of the first approaches to enable communication between 2 different web services, again expanding on the flexibility and interoperability of these APIs.

SOAP and REST were then introduced. These were new protocols to exchange structured information between web services.

So again, looking back, this brings me back to my days in college dealing with APIs. Twitter rolled out one of the first ones there, really starting the digital transformation when it comes to APIs.

Now with the big cloud boom, those services moved to remote data centers as user demands and overall requirements changed in the day-to-day interaction with these digital applications. And with that, the rise of the API economy.

And this is where APIs became an extremely critical component for any business to transform digitally in today's world. Now if we talk about the current state and the future, everything we interact with from a digital perspective is most likely powered by an API, as Leo mentioned.

And they tend to be, you know, the hidden heroes of the digital interactions that we have on a day-to-day basis. Microservices were introduced.

This allowed for greater modularity, scalability, and maintainability from within the application perspective. GraphQL was introduced by Facebook.

This allowed developers to only request the data they needed instead of having to request a large, you know, selection of data that you eventually sort through and filter out. API management platforms were also introduced like Apigee, MuleSoft, API gateways from AWS, Google.

Now these platforms provide tools for API design, security, monitoring, and monetization. This helped organizations manage their API ecosystems more effectively.

And I'm definitely looking forward to seeing what happens as technology continues to advance, as these APIs will play a pivotal role in connecting the world's software and, obviously, enabling the next generation of applications. I'm sure we've all heard the word AI at least 12 times today already, and I'm definitely looking forward to seeing how that impacts APIs.

Now as shown in the previous slides, API architecture has come a long way. Here, I'm just showing 3 different types of protocols or designs when it comes to APIs.

And these just introduce more efficient ways to interact with web services through different protocols and designs. But, unfortunately, this efficiency has made gaining insight into the reliability of your APIs one of the most critical requirements for digital experience monitoring.

Now I'm going to pass it over to Nilabh. And if you don't mind, Nilabh, do you mind digging just a little deeper into why API monitoring is so important in today's world?

Nilabh Mishra

13:26 - 16:35

Absolutely. Thank you, Brandon.

Thank you for setting the perfect stage for me to come in and talk about why APIs are important and why it is equally important to monitor them. Even if we're not talking about APIs specifically, if you just look at monitoring as a topic, the first question that I generally ask myself when I get into conversations like these is: why do we monitor anything in the first place? When it comes to that question, the answer is pretty straightforward.

If you look at our daily lives, we monitor a lot of different things. We monitor our heartbeats, we monitor the steps we walk, we monitor our stocks, and a lot of different things again.

The primary reason why we monitor is to be able to detect any aberrations or deviations from normal. For that same matter, API monitoring is also important because when you are constantly monitoring and collecting these measurements, it sets a benchmark.

When these measurements are out of the normal, it allows you to detect them. Of course, that also paves the way for you to detect issues that need to be fixed in order to restore the state of the service to where it is expected to be.

For that very reason, it's very important for us to have a monitoring strategy for APIs and for the other systems that we work on, to ensure that these are working as expected. Now if we focus on APIs and dig a little deeper, we have moved away from the monolithic systems that we saw in your evolution talk.

Given the fact that we are now focusing on microservice-focused architecture when it comes to both developing and deploying applications, what has happened is it has led to a situation wherein there's a lot of decentralization. And from an API standpoint, as we all know, APIs are of different types.

Some are private, some are public, some are composite, some are partner, all different types of APIs. But when you're talking about a system that makes use of all of these different types of APIs to make functionalities work, it becomes equally important to get visibility into how all of these are working.

And even better, when there is a problem, if we are in a position to understand whether it's because of something that's under my direct control, or something that needs to be worked on with someone else. And that's where the topic of SLAs also comes into the picture.

But again, with that, being able to detect issues and being able to restore and fix them to the previous state is what makes API monitoring extremely important. With that, we are going to go back to Brandon, and we are going to look at some of the use cases, which are primarily the crux of this webinar today.

Brandon DeLap

16:35 - 23:00

Excellent. Thank you very much, Nilabh.

So the first use case that I'm going to touch on here would be about e-commerce. I'm sure you've all heard this one before, you know, the age-old tale.

Lost customers and revenue due to reliability issues on an e-commerce site. Well, each step of the user journey involves critical web services, and it's imperative to monitor the reliability of those web services from the end user's perspective in a proactive manner, as Leo touched on earlier.

So here's just a typical user scenario on any type of e-commerce site. In this case, I'm just browsing for movies on a movie ticket app.

I'm selecting my seats. I see which ones are available.

I see which ones may cost more. All of these individual components on this page are being populated by multiple APIs.

Next step, I'm going to go ahead and insert my payment information. I'm going to hit submit, and I'm going to get a confirmation back from the application, or the API in this case, to let me know that my payment was successful.

And I now have this movie ticket and I can go watch, you know, the latest Marvel movie out in theaters during the holidays here. Now through Catchpoint's JavaScript API monitor, we can replicate each individual API call or execute them in a series of calls as well, like a scripted flow.

Now this is different from leveraging Selenium or Playwright to emulate this user scenario. Now this is obviously an API webinar, so I'm going to stay focused on APIs.

But it is also possible to drill down into each individual API request from a Playwright or Chrome browser perspective. In this case, what I have on my screen here is, essentially, just a simple GET.

I am looking for a specific movie, calling that from the API. I am extracting content from the response of the API, and then I'm asserting that I received the desired movie that I expected.

The next API monitor here would be to get the available seats and then store those as a variable to leverage in a subsequent step. The next type of API monitor here would be to go ahead and post payment information.

So all of that credit card, name, and address information, and then also assert that the checkout was successful. So through Catchpoint's API monitor, you have all of these commands and functions available out of the box to build these step-by-step flows or, obviously, individual API checks for a specific use case.
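
As a rough illustration of that three-step flow, here is a generic Node.js sketch; the movie-ticket endpoints and field names are invented for illustration, and Catchpoint's own script commands are not shown here:

```javascript
// Hypothetical three-step API transaction: search a movie, pick a seat,
// post payment, asserting at each step. All endpoints/fields are invented.
const BASE = "https://api.example-tickets.com";

async function run() {
  // Step 1: GET a specific movie and assert it is the one we expect.
  const movieRes = await fetch(`${BASE}/movies?title=Example%20Movie`);
  const movie = (await movieRes.json()).results[0];
  if (!movie || movie.title !== "Example Movie") {
    throw new Error("Step 1 failed: desired movie not returned");
  }

  // Step 2: fetch available seats and store one for the next step.
  const seatsRes = await fetch(`${BASE}/movies/${movie.id}/seats?status=available`);
  const seat = (await seatsRes.json()).seats[0];
  if (!seat) throw new Error("Step 2 failed: no available seats");

  // Step 3: POST payment details and assert the checkout succeeded.
  const payRes = await fetch(`${BASE}/checkout`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ movieId: movie.id, seatId: seat.id, card: "4242..." }),
  });
  const receipt = await payRes.json();
  if (payRes.status !== 200 || !receipt.confirmationId) {
    throw new Error("Step 3 failed: checkout not confirmed");
  }
  console.log("Transaction succeeded:", receipt.confirmationId);
}

run().catch((err) => console.error(err.message));
```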

Excellent. On to the next use case here.

In this case, this is for streaming and media delivery. Again, I'm sure you've all experienced this one as well.

You finally get done for the day. Kids are asleep.

The dishwasher's all packed up, and you want to watch that latest episode of your favorite show. But unfortunately, the Internet has different plans for you.

It always seems like that's the case. You experience buffering issues.

The stream drops to a low quality. Everyone who's sitting there watching that show with you is obviously now having a poor experience from that streaming site.

But what if you could proactively monitor this experience and determine at a glance where the issue is? Whether it's your CDN vendor, different ISPs based on different regional locations, or just latency coming from that API or from your back end in your own systems? So in this scenario, I'm going to go through a couple of steps. The first one here is just browsing a live guide, trying to select a show.

Obviously, several APIs are involved in capturing or requesting which specific show you want to watch: what time is it available, what channel is it on.

And then finally, you find that, in this case, the NFL channel. You open that up and you start viewing that live stream from within your Apple TV or browser.

Now this script is a little longer, so I'm going to go ahead and break this down a little differently here. In this case, what you can see is we're loading a live stream.

We are capturing the CDN provider that is delivering that stream, storing that as a custom variable to then track and measure on an ongoing basis: which CDN provider is delivering the content, and is there latency coming from a specific CDN provider at that point in time. So in script part 1 here, you can see what we have available.

We have the ability to set custom headers, set variables. We can write custom functions to extract content and set variables.

We can then execute those functions. Now all of this can be done directly from within the Catchpoint API script monitor.

So in part 2 here, we can see we are getting a stream URL and extracting specific segments from the response. Those would be the individual segments of that stream, which we can then loop through with a function.

So we request those segments on an ongoing basis, extract and assert whether those functions executed properly, and then set those custom metrics from a CDN perspective. As you can see, even the most complex use cases can be emulated with our JavaScript API monitor.
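
A generic sketch of the same idea in plain Node.js, assuming an HLS-style playlist; the manifest URL and the CDN-identifying header are illustrative assumptions, since the exact header varies by provider:

```javascript
// Sketch of the streaming use case: fetch a stream manifest, record which
// CDN served it, then loop over segment URLs asserting each downloads.
const MANIFEST = "https://stream.example.com/live/channel.m3u8"; // placeholder

async function checkStream() {
  const res = await fetch(MANIFEST);
  // Many CDNs identify themselves in a response header; the exact header
  // name varies by provider, so treat this as a placeholder.
  const cdn = res.headers.get("x-served-by") ?? "unknown";
  console.log(`Manifest served by CDN: ${cdn}`);

  const manifest = await res.text();
  // Extract segment URIs (non-comment lines in an HLS playlist).
  const segments = manifest.split("\n").filter((l) => l && !l.startsWith("#"));

  for (const seg of segments.slice(0, 5)) {
    const url = new URL(seg, MANIFEST).toString();
    const started = Date.now();
    const segRes = await fetch(url);
    if (!segRes.ok) throw new Error(`Segment failed: ${url} (${segRes.status})`);
    console.log(`Segment OK in ${Date.now() - started} ms: ${url}`);
  }
}

checkStream().catch((err) => console.error(err.message));
```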

Now over to Nilabh, to wrap up the use cases here in this section.

Nilabh Mishra

23:00 - 35:40

Thank you, Brandon. The next 2 use cases; I'm going to start with the 3rd one first, and this would probably be the 13th time for Brandon that we are going to use the word AI, because we are going to talk about an LLM, AI-specific use case.

The idea here is to use this as an example to showcase the growing importance of large-language-model-based APIs and the role that they are going to play in the next few years of application development. When it comes to LLM APIs and their use cases, there's a ton of them.

Be it AI assistants, chatbots, or applications responsible for code generation or code refactoring, a lot of these applications that are increasingly becoming extremely important for us are powered, or are going to be powered, by these large language models. This next use case is a fun one because, without getting into many complexities, what we did was pick NVIDIA's NIM API.

And for folks who are not aware of the NIM API, it's an API by NVIDIA that can be used to deploy generative models at scale, deployed anywhere. So what we did was use Meta's Llama as the model, and then we used Catchpoint's JavaScript engine to write a simple script to prompt the LLM API to write a limerick about Catchpoint.

So on the left is the JavaScript, which of course includes important details such as the endpoint that we are querying; this is a RESTful API. It is immediately followed by the payload that we are passing.

That's where the set navigate post data command comes into the picture. And if we look at the payload, there are a few important things.

There's definitely the prompt, and then that's followed by the temperature. Because again, with these LLM APIs and engines, you need to define the temperature.

On the right is where you get the JSON output. Now I'm not going to read the limerick that the system gave back, but if you go through it, it did not do a bad job.

That's pretty much aligned with what we do as an organization. But the important point here, team, is to understand how this is going to be extremely important, not just for the applications that we are going to build, manage, or scale, but also for the applications that we will be using that are going to be dependent on these large language models.
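
To make the shape of such a call concrete, here is a hedged sketch of a chat-completions style request with a prompt and a temperature; the endpoint path, model name, and environment variable are assumptions rather than the exact script shown in the webinar:

```javascript
// Sketch of prompting an LLM endpoint over REST: a chat-completions style
// payload carrying the prompt and a temperature, as described above.
const ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"; // assumed

async function promptModel() {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Keep the key out of the script body, echoing the credential-vault advice.
      Authorization: `Bearer ${process.env.NIM_API_KEY}`,
    },
    body: JSON.stringify({
      model: "meta/llama3-8b-instruct", // assumed model identifier
      messages: [{ role: "user", content: "Write a limerick about Catchpoint." }],
      temperature: 0.7,
      max_tokens: 128,
    }),
  });
  if (!res.ok) throw new Error(`LLM call failed: ${res.status}`);
  const data = await res.json();
  console.log(data.choices[0].message.content);
}

promptModel().catch((err) => console.error(err.message));
```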

And of course APIs are going to play a key role connecting the 2 systems. Now when we move forward to the next one, this again is an important one because, given the role that cloud is playing in the development of applications today, cloud storage systems have become extremely important.

Be it a cloud-native application or a hybrid one deployed in a hybrid model, we use cloud storage systems like S3 and Google Cloud Storage for a lot of different use cases.

They help us scale our applications, make sharing of files easier, and help with redundancy. But when it comes to this particular use case, what we focused on was monitoring the 2 very critical components of a file storage system: uploads and downloads.

So if you look at the applications today, or some of the use cases powered by cloud storage systems, being able to train models from an API standpoint definitely stands out. But these are also useful when it comes to just being able to have users upload files to systems like these, which can then again be made available to the applications.

But in this particular example, and we don't have to go line by line here when it comes to the script, it's almost 100 lines of code. If you look at the split, there are a few critical capabilities that we would want to highlight here.

And again, this is something that was not possible earlier. With the introduction of the credential library, or the vault, in Catchpoint, when it comes to writing your JavaScript and API monitors, you need not expose your tokens or credentials.

You can integrate with the library and, using the Catchpoint commands, reference the tokens and keys that are extremely important when it comes to working with any kind of API. That's what the first section focuses on.

In the second section, what we are doing is something amazing. We are actually using AWS's S3 SDK; it's a single line of code there wherein we import it, allowing us to test both upload and download from a functionality standpoint when it comes to these storage systems.

If this were not possible, I would probably have had to go back and make use of CryptoJS or a similar library to generate hashes and, of course, follow the standards that AWS expects for either downloading or uploading files. And as part of the script itself, since we are talking about multi-step transactions, you have capabilities to split your functions or steps. That's very important, especially from a performance or alerting standpoint, because being able to split the API transaction allows you to route alerts to the necessary teams, or just look at availability and performance numbers separated by function.

And in the last section of the script itself, we also have the error handling logic included. When it comes to error handling, one limitation that we have seen with API testing engines is that, especially when there is a failure, it becomes a little tricky to figure out where exactly the failure happened or what the reason was.

But in this case, because of the customizations that are available, you can write your own error handling logic and either push it via standard output or just expose it in the script itself, allowing us to handle those errors much better. The second part of the script is again a few lines of code, but it is the promise handling and the multi-part upload/download logic that's built in.

But again, the overall high-level idea here is to highlight some of the capabilities available today, which allow us to handle not just the straightforward sequential use cases, wherein you query an API, get the response, and then move forward with your transaction, but also use cases like these wherein you can do a lot more using the same engine and the same capabilities. So that's the idea.
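
For flavor, here is a minimal sketch of the same upload/download check using the AWS SDK v3 for JavaScript, with the steps split and wrapped in error handling as described above; the bucket and key names are placeholders, and credentials are assumed to come from the environment or a vault:

```javascript
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // credentials from env/vault
const BUCKET = "example-monitoring-bucket"; // placeholder bucket
const KEY = "synthetic-check.txt";

// Wrap each step so a failure reports where it happened, echoing the
// split-steps and error-handling points above.
async function step(name, fn) {
  try {
    const started = Date.now();
    const result = await fn();
    console.log(`${name}: OK in ${Date.now() - started} ms`);
    return result;
  } catch (err) {
    console.error(`${name}: FAILED (${err.name}: ${err.message})`);
    throw err;
  }
}

async function main() {
  await step("upload", () =>
    s3.send(new PutObjectCommand({ Bucket: BUCKET, Key: KEY, Body: "probe" }))
  );

  const body = await step("download", async () => {
    const res = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: KEY }));
    return res.Body.transformToString();
  });

  if (body !== "probe") throw new Error("Downloaded content did not match upload");
}

main().catch(() => process.exit(1));
```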

Now in the next few slides, I'm going to spend some time talking about the onboarding of these scripts and test cases in Catchpoint, because that's the immediate next set of topics that comes into the picture. From a creation standpoint, there are a few different ways in which you can onboard your API scripts or test cases in Catchpoint.

The first one is the straightforward UI based approach, wherein you interact with the GUI of Catchpoint and then create these test cases. The second one, and since it's an API based webinar, I cannot not talk about the API capabilities that Catchpoint offers to onboard these test cases.

So the second method involves making use of the restful APIs from Catchpoint itself to onboard these test cases. The third one is an extension to the API based setup wherein you can take this 1 notch or 2 notches, I would say, above and implement the logic of the test creation as part of your continuous integration and deployment pipelines.

Because as we know, we all are moving towards automation. We all want to build systems and manage systems which require no touch to low touch.

Right? So in this case, that's where the CI/CD integration piece comes into the picture, along with the other integrations that can allow us to onboard these test cases. Now creating these tests is step 1.
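
Purely to illustrate the CI/CD pattern, a pipeline step might create a test over REST like the hypothetical sketch below; the endpoint and payload shape are invented, not Catchpoint's documented API, so consult the vendor's API docs for the real schema:

```javascript
// Hypothetical sketch of onboarding a test via a monitoring vendor's REST
// API from a CI/CD job. Endpoint, payload, and token handling are invented.
const API = "https://api.monitoring-vendor.example/v1/tests";

async function onboardTest() {
  const testDefinition = {
    name: "checkout-flow-api",
    type: "api-script",
    frequencyMinutes: 5,
    script: "/* JavaScript transaction script goes here */",
  };

  const res = await fetch(API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.MONITORING_API_TOKEN}`,
    },
    body: JSON.stringify(testDefinition),
  });
  if (!res.ok) throw new Error(`Test onboarding failed: ${res.status}`);
  console.log("Test created:", (await res.json()).id);
}

onboardTest().catch((err) => console.error(err.message));
```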

A lot of times, especially when dealing with APIs, it's important for us to enable certain additional capabilities, which we call additional settings in Catchpoint, allowing us to, in some cases, enable additional debug options or options such as header and response capture, which are extremely important when you are triaging API-related issues. So using these extended capabilities, you can add to the telemetry that you are already capturing, but you can also enable debugging at a much more granular level.

There is another component, because we have been talking about APIs, but APIs, like anything else on the Internet, work on the same fundamental principles that power the Internet. So your DNS, your TCP, your network still play a very important role even when it comes to APIs.

Because, let's say, if the endpoint is not reachable from a machine or from an end user's perspective, whether it's available or functionally correct does not matter, because that path is broken. That's also where you have additional capabilities like enabling path capture or network traffic capture, helping you get visibility into those components.

Now in the next few slides, I'm just going to talk about or showcase what the output of enabling some of those options looks like, because this would be incomplete if we just looked at the setup and not at what it's going to give us back. This is one of the most important ones, because Leo initially touched upon the concept of synthetic, proactive versus passive, monitoring.

The option that we saw in the previous slide, which allows us to enable tracing, brings the 2 together and gives us that stereo vision, because synthetics allow us to understand what's going on from a proactive standpoint. But when you have tracing enabled, it also allows us to connect the synthetic runs to the actual system traces, giving us more visibility and control into what's going on for an application or a microservice-specific architecture, and giving us a lot of additional granular details, making the actions that we take at our end a lot more actionable.

Quickly moving on to the next few slides, the request and response capture: we touched upon this when talking about the additional capabilities. This, when enabled, gives us outputs like these, which are again very helpful.

For example, when you are using an API gateway, or when there is a problem that you are debugging, being in a position wherein you have access to these request and response body captures makes that triaging process a lot easier and a lot more seamless from a troubleshooting standpoint. And this again is just the path visualization.

This is giving us a visual of what that network map looks like when enabled, especially when we are looking at it end to end, right from the end user through the network to the traces that we saw earlier.

And with that, I'm going to hand it back to Brandon so that we can spend some more time looking at the visualizations that you can build on top of these datasets.

Brandon DeLap

35:40 - 40:11

Excellent. Thank you very much, Nilabh.

So, as Nilabh mentioned, I'm going to be walking through the visualizations component of API monitoring with Catchpoint. So, a little more of the outcome.

Right? What can users use within the platform to see at a glance, or drill deep into, what's actually going on from an API or web service health perspective? So to get started, we have default or custom dashboards.

These allow you to pull out, or choose, specific APIs or web services that are critical for your user scenarios. Whether that's a checkout flow, loading a stream, or, as Nilabh mentioned, uploading files to S3.

We also allow you to break down key metrics. As Nilabh touched on earlier, we capture the individual request components for each one of these APIs or web services that you're executing from within Catchpoint.

And you have the ability to set your own thresholds, dynamic thresholds as well, on these custom dashboards. The next view here would be Smartboards.

This is again another type of overview, but in this case for individual tests. It is an AI-powered visualization, so it'll allow you to take some of the guesswork out of what happened when we were executing those API transactions within Catchpoint. Were there any key events? Were there any specific trend shifts? What's the overall experience score for that specific API that you're executing? And then down into the waterfall or records view for the atomic level.

Full data fidelity. We store these in their most granular form.

So you do have the ability to go into those individual API requests and see exactly what the server responded with from a header and a content perspective. Now on to Stack Map.

Stack Map is a visualization of all of your services at a glance, internal and external dependencies, and how they impact or interact with your web services. One-click isolation into those impacted services, in this case my auth service, within your Internet stack.

And of course, the individual records view, allowing you to see exactly how your server responded during that test execution. In this case, a 401 Unauthorized.

Now when you send traces to Catchpoint, you can also visualize the service dependencies for your real-time and synthetic traffic. So for those same synthetic executions that you're sending through Catchpoint, you can also see the full trace, end to end, into your application stack.

This will highlight, at a glance, which services may be currently impacting your applications due to their service health. You can see whether or not there was some sort of increase or decrease in response time, errors, or number of requests.

Again, this would be passive data from within your application. We can then click further into those specific services and see a little more high-level detail on the trend for that service.

And then you can drill into the individual traces themselves showing you exactly, again, how your server responded at that point in time. In this case, we're seeing a SQL exception error from the SQL query.

And, of course, a webinar titled Planet of the APIs will include even more APIs. We have out-of-the-box integrations, done through APIs, of course, with your collaboration, DevOps, APM, alerting, and analytics tools.

Now I'm going to go ahead and pass it back over to Nilabh so that he can walk us through some tips and tricks on how to build resilient API monitors.

Nilabh Mishra

40:11 - 47:26

Awesome. Thank you, Brandon.

As part of the use cases, we did focus on some of those options when it comes to, you know, building resilient API monitors that you all can use. But there are a few that stand out, especially when it comes to working with APIs and with JSON or XML data.

These are some of the important ones that we should definitely be looking at when building these API scripts and monitors. Perfect.

So the first set of commands, from a tips standpoint: especially when you are working with APIs, be it REST or GraphQL based, your transaction is going to be sequential. You are going to reuse values from the response of one call in maybe the next call, or maybe 2 calls after.

So let's say you made an endpoint call to generate an access token; in the next call, you would need to pass that as an authorization header. When it comes to data extraction, the library by default, out of the box, supports certain commands like extract and extract child, which make it extremely easy for us to use single lines of code to extract values from the JSON response, the request, the headers, or the URL, again, from any of those destinations, to be made available in that API script.

So in this case, we are going to extract that access token and then reuse it by passing it as an authorization header in the next step. So extract and extract child: 2 very important commands when it comes to writing API scripts that are resilient.
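
A generic sketch of that token-reuse pattern in plain Node.js; the URLs and field names are placeholders, and ordinary JSON access stands in for Catchpoint's extract command:

```javascript
// Token-reuse pattern: call an auth endpoint, pull the access token out
// of the JSON response, and pass it as an Authorization header next.
async function tokenFlow() {
  const tokenRes = await fetch("https://api.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ clientId: "...", clientSecret: "..." }),
  });
  const { access_token } = await tokenRes.json(); // the "extract" step

  const ordersRes = await fetch("https://api.example.com/orders", {
    headers: { Authorization: `Bearer ${access_token}` }, // reuse in step 2
  });
  console.log(`Orders call status: ${ordersRes.status}`);
}

tokenFlow().catch((err) => console.error(err.message));
```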

Now the second one, and this is a continuation, or an extension, of the first, because when it comes to extraction, you need a capability that allows you to pattern match or read JSON data. As of today, you have 2 options.

You can either go with regex-based pattern matching or use the JSONPath parsing method. Personally, I find JSONPath parsing a lot easier to use, especially when I'm dealing with JSON, because it allows you to write simpler queries and you don't have to spend a lot of time trying to figure out how to make a regex work.
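
To see the difference, here is a small sketch contrasting the two extraction styles on the same response body; the simple path walk stands in for a full JSONPath engine:

```javascript
// Contrast of the two extraction styles on the same response body.
const body = '{"auth":{"access_token":"abc123","expires_in":3600}}';

// Regex-based pattern matching: works, but brittle if the layout changes.
const viaRegex = body.match(/"access_token"\s*:\s*"([^"]+)"/)?.[1];

// Path-based extraction (the JSONPath idea, here as plain traversal):
// parse once, then walk the path "auth.access_token".
const viaPath = "auth.access_token"
  .split(".")
  .reduce((obj, key) => obj?.[key], JSON.parse(body));

console.log(viaRegex, viaPath); // both print: abc123
```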

But, again, the idea here is to make use of the extract command in combination with JSONPath or regex-based pattern matching to tell the system which values are important and need to be extracted and saved for future use. Now the extraction is pretty good, and as long as it's being used in the script itself at a later stage, it's perfect.

It lets us do a lot of different things, like the token example. But a lot of times you might get into a situation wherein you want to save these extracted values as strings or as numbers, like metrics, to be used at a later stage.

So let's say you wanted to capture, for an inventory call, how many SKUs were being returned for an e-commerce organization. You make a search call, maybe using the search microservice, and it comes back with x number of items or SKUs.

You may want to capture that value and save it so that it can be plotted as a custom metric in Catchpoint. So when it comes to capturing either metrics or dimensions, there are 2 specific commands.

For strings, it's the tracepoint, or set tracepoint, command. And for metrics or numerical values, it's the set indicator command that can be used to not just capture those values but also save them so that they can then be used in the analysis engines, in your reporting, and when building an alerting strategy on top of those.
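
Here is a generic sketch of the idea; recordMetric and recordDimension are hypothetical stand-ins for the set indicator and set tracepoint commands, whose exact signatures are not reproduced here, and the shop API is a placeholder:

```javascript
// Sketch of capturing a custom metric and a custom dimension from an
// API response; the recording functions are illustrative stand-ins.
function recordMetric(name, value) { console.log(`metric ${name}=${value}`); }
function recordDimension(name, value) { console.log(`dimension ${name}=${value}`); }

async function checkInventory() {
  const res = await fetch("https://api.example-shop.com/search?q=shoes"); // placeholder
  const results = await res.json();

  // Numeric value -> indicator-style metric (e.g. SKU count returned).
  recordMetric("sku_count", results.items.length);

  // String value -> tracepoint-style dimension (e.g. which backend served it).
  recordDimension("search_backend", res.headers.get("x-backend") ?? "unknown");
}

checkInventory().catch((err) => console.error(err.message));
```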

So just to give a quick recap: extraction using extract and extract child, which of course includes regex-based and JSONPath parsing, in combination with the indicators and tracepoints, will give you that end-to-end setup, which you can then use to extract whatever values are important to you and, of course, use them later. And then this is the last slide when it comes to this webinar.

It goes into a little more detail highlighting some of the common verbs and macros that you can make use of. Because, again, the reason JavaScript API scripting is easy is that you can write custom code.

What makes it even easier is when you don't have to write that code at all and can just use a simple command that's natively supported in Catchpoint for doing the exact same operation. So as we work more and more with APIs, there are certain fields, such as time, which require a lot of manipulation.

I'll give you a simple example. Let's say you're doing a search for flights using a travel aggregator, and you want to write an API script that monitors that functionality end to end.

In a lot of cases, your script would not be resilient because you would be hard-coding the time of booking, and the script is going to start to fail when you hit that time.

So when it comes to time manipulation, there are the time and time trim macros natively available to you when you're writing your scripts, which you can use to manipulate time. So in this case, in your script, you can just use that macro to pick a time, or a date, that is in the future.

Again, allowing you to write scripts that are a lot more resilient, which are not going to fail, and allowing you to handle that logic. Apart from time, there are a few others.

Like, there's a random GUID macro for GUID generation. Extremely helpful if you want to bust the CDN cache and pass a random value so that you get fresh content served to you, bypassing your CDN and hitting the origin directly.

But again, there are a lot of these macros and variables available today. The idea here is to make use of them so that we can write resilient scripts.
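
As a closing illustration, here are plain-JavaScript equivalents of those two macro tricks; the travel API URL is a placeholder, and in a Catchpoint script these values would come from macros rather than inline code:

```javascript
// Plain-JavaScript equivalents of the two macro tricks described above.
async function searchFlights() {
  // 1. A departure date a week in the future, so the script never
  //    hard-codes a date that eventually passes and starts failing.
  const depart = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000)
    .toISOString()
    .slice(0, 10); // e.g. "2025-06-01"

  // 2. A random GUID as a cache-busting query parameter, pushing the
  //    request past the CDN cache to the origin.
  const url =
    `https://api.example-travel.com/flights?date=${depart}` + // placeholder API
    `&cb=${crypto.randomUUID()}`;

  const res = await fetch(url);
  console.log(`Flight search returned ${res.status}`);
}

searchFlights().catch((err) => console.error(err.message));
```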

Awesome.

Leo Vasiliou

47:26 - 49:35

Alright. So, first, let me take a moment to say, thank you, Brandon.

Thank you, Nilabh, for the wonderful talk through. I mean, I personally enjoyed some of the examples, the deep dive.

Right? And, as I tried to politely say at the beginning, if we had to explain what an API was, you know, this masterclass might not have been for you. So we hope you already had a basic understanding and already appreciate the value.

I also appreciated the reinforcement of the key themes that we talked about when we were setting the foundation. So as we work to close the webinar: if you're interested in hearing more about some of the use cases, maybe you want to go deeper, please navigate to the polls tab and answer our question. And even if you don't want to go deeper, maybe you want to hear about some other solutions, or have us have a conversation with somebody else in your organization, then please go to the polls tab as well.

Otherwise, if there are any other questions or comments, feel free to type them into the Q&A tab or the chat, and we will address them. Otherwise, we'll say thank you again for giving us some of your precious time today.