Blog Post

Front-end vs Back-end vs Network Performance

Published November 3, 2016

In the past, the word “developer” referred to the individuals who wrote code for the application and database. A need then arose for a category describing developers who write user-facing code such as HTML, CSS, and JavaScript, which led to the emergence of the terms front-end and back-end developer. More recently, the concept of the full-stack developer emerged to address the need for developers who work on both the front end and the back end.

From a development perspective, it makes sense to talk about front-end vs back-end; but when you start talking about application delivery or performance, something is missing. When articles state that web developers are responsible for every aspect of a website, they are missing the big picture. Agile methodologies and DevOps have made strides in breaking down the silos that have previously existed, but statements like these make it apparent these silos still exist. You can develop the most amazing website ever, but without a way to deliver it to end users, nobody will enjoy it. Let’s break out of the silos and recognize that developers are part of a broader team when it comes to delivering amazing applications.

The web performance community took hold of the front-end/back-end classification and began describing performance in terms of front-end vs back-end, and the Golden Rule of web performance emerged: around 80-90% of performance issues are front-end issues, and the remainder are back-end. The problem with this classification is the siloed focus on looking at the application only from the developer’s perspective. To say that 10-20% of performance issues are related to database lookups, compiling pages, and web service calls ignores the role and impact the network has on application performance.

There are countless ways to tune front-end and back-end performance. The table below lists just a few:

[Table: examples of front-end and back-end optimizations]

These are all fabulous suggestions and should be followed, but what if your performance testing shows that you have a problem with packet loss or DNS? None of these recommendations apply.

Do 80-90% of performance issues have to do with the front-end? Yes. Are the remaining 10-20% all related to the back-end? No. Issues with DNS lookups, TCP connections, or SSL negotiation have a negative impact on performance, and these are part of the network infrastructure. The network has grown more complex and now extends to include third-party and cloud services. In these environments, blaming the back-end for issues with a DNS or CDN provider is wrong.

From an application performance perspective, we need to look at the whole picture, not just a piece of it – and that includes the network. Digital experience metrics should be divided into network, front-end, back-end, and end-to-end to accurately portray where issues are. Leaving out any part of the holistic perspective – network, back-end, or front-end – leaves gaps in understanding the digital experience.

At the individual component level, metrics are divided into network and back-end. Nothing related to the request and delivery of a single resource is tied to the front-end.

[Figure: timing breakdown for a single component]

But a web page isn’t about a single resource. Hundreds of requests and responses are combined to deliver a web page to an end user. A standard waterfall cannot clearly illustrate where the demarcation lies between network, back-end, and front-end. To understand whether performance problems are related to the front-end, back-end, or network, you have to understand what each metric is measuring and take some time to think about what that means.
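The component-level split above can be sketched as a small helper: given phase boundaries for a single resource, only the wait (TTFB) bucket is attributed to the back-end, and everything else is network. The field names follow the W3C Resource Timing model, but the millisecond values are made up for illustration.

```javascript
// Hypothetical per-resource timing marks (milliseconds), shaped like the
// W3C Resource Timing fields. All numbers here are invented.
const resource = {
  domainLookupStart: 0, domainLookupEnd: 30,  // DNS lookup
  connectStart: 30, connectEnd: 75,           // TCP connect (incl. SSL on HTTPS)
  requestStart: 75, responseStart: 190,       // wait (TTFB)
  responseEnd: 220,                           // content download finished
};

// Split a single resource's time into the network vs back-end buckets
// described above: only wait time (TTFB) is attributed to the back-end.
function classifyResource(r) {
  const dns = r.domainLookupEnd - r.domainLookupStart;
  const connect = r.connectEnd - r.connectStart;
  const wait = r.responseStart - r.requestStart; // back-end estimate
  const download = r.responseEnd - r.responseStart;
  return {
    network: dns + connect + download,
    backEnd: wait,
  };
}

console.log(classifyResource(resource)); // { network: 105, backEnd: 115 }
```

Summing these buckets across every resource in a waterfall gives a rough network vs back-end split for the whole page.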

Network timings

Issues on the network, such as packet loss and latency, impact application performance. When looking at page metrics, TCP connection times will reveal the impact latency or internet peering is having on an application. The higher the latency between two points, the longer it takes for a TCP connection to be established. If there isn’t much latency between the end user and the application but TCP connection times are still high, this could indicate an issue at the provider or a routing problem.
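Since a TCP handshake costs roughly one round trip, connect time is a rough proxy for latency, and a handshake that takes far longer than the expected round trip hints at the peering or routing trouble described above. A minimal sketch, assuming Navigation/Resource Timing-style `connectStart`/`connectEnd` marks; the threshold multiplier is an illustrative choice, not a standard.

```javascript
// Connect time from timing-style marks (milliseconds).
function connectTime(t) {
  return t.connectEnd - t.connectStart;
}

// Flag connections whose handshake takes far longer than the expected
// round-trip time -- a possible sign of peering or routing problems.
// The 3x multiplier is an arbitrary illustrative threshold.
function suspiciousConnect(t, expectedRttMs) {
  return connectTime(t) > expectedRttMs * 3;
}

const timing = { connectStart: 100, connectEnd: 420 };
console.log(connectTime(timing));           // 320
console.log(suspiciousConnect(timing, 40)); // true: 320 ms >> ~40 ms RTT
```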


Back-end

While time to first byte (TTFB), also known as wait time, can be influenced by network issues, it is the best metric for estimating back-end time when measuring applications externally. TTFB shows how long the server took to begin sending data after receiving a request. If DNS or TCP connection times are higher than expected, a high wait time may be more indicative of network issues.


On the other hand, if DNS and connection times are within normal ranges and wait time is high, this indicates issues at the back-end rather than the network.
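The triage rule in the last two paragraphs – blame the back-end for a high wait time only when DNS and connect times look normal – can be sketched as follows. The threshold values are hypothetical placeholders; real baselines depend on your application and your users.

```javascript
// Attribute a high wait time (TTFB) to the back-end only when the
// network-side phases look normal. Thresholds are illustrative only.
function attributeWaitTime({ dnsMs, connectMs, waitMs },
                           { dnsMax = 50, connectMax = 100, waitMax = 200 } = {}) {
  if (waitMs <= waitMax) return "ok";
  // DNS or TCP also elevated: the network is the likelier culprit.
  if (dnsMs > dnsMax || connectMs > connectMax) return "likely network";
  return "likely back-end";
}

console.log(attributeWaitTime({ dnsMs: 20, connectMs: 60, waitMs: 900 }));  // "likely back-end"
console.log(attributeWaitTime({ dnsMs: 300, connectMs: 60, waitMs: 900 })); // "likely network"
```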


Front-end

Some may argue that everything after the base HTML page has been delivered is front-end, but that’s not accurate. Individual components beyond the base page are still impacted by network and back-end issues: additional DNS lookups must be performed for content on other internal domains or for third-party content, and the additional TCP connections and data transfers will still be impacted by packet loss and latency.

In the figure below, “blocked” time is the only metric that can be attributed entirely to the front-end. Blocked time indicates that a request was ready to be made, but the browser was unable to issue it – perhaps because it was busy downloading other content or parsing a previously downloaded file.
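One way to approximate blocked time from waterfall-style entries is the gap between when the browser discovered a resource and when it actually began fetching it. The sketch below assumes Resource Timing-style `startTime`/`fetchStart` fields and treats that entire gap as blocked, which is a simplification.

```javascript
// Approximate blocked time: the resource was known at startTime but the
// browser did not begin the fetch until fetchStart (e.g. it was busy with
// other downloads or parsing). Values are hypothetical milliseconds.
function blockedTime(entry) {
  return Math.max(0, entry.fetchStart - entry.startTime);
}

const entries = [
  { name: "app.css",  startTime: 120, fetchStart: 125 },
  { name: "hero.jpg", startTime: 130, fetchStart: 610 }, // queued behind other work
];

for (const e of entries) {
  console.log(e.name, blockedTime(e)); // app.css 5, hero.jpg 480
}
```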

[Figure: waterfall with blocked time highlighted]

There are a number of front-end optimizations, such as reducing the number of HTTP requests or controlling the order in which items load and execute in the browser, that reduce the amount of time spent waiting for and downloading the content needed to view a web page.

End-to-end

Many metrics used to measure web performance look at the whole picture. These metrics won’t help you isolate where application issues are occurring, but they will help you understand the application from the user’s perspective. To a user, it’s not about whether there is an issue at the network, the back-end, or the browser – it’s about whether a task could be completed in a reasonable amount of time. Metrics like page load time, document complete, and speed index provide insight into the overall user experience.
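As a sketch of the end-to-end view, page load time can be computed from Navigation Timing-style marks without attributing the time to any one layer. The field names follow the Navigation Timing model; the millisecond values are invented.

```javascript
// End-to-end page load time: from the start of navigation to the load
// event, exactly as the user experiences it, with no blame assigned to
// network, back-end, or front-end.
function pageLoadTime(t) {
  return t.loadEventEnd - t.navigationStart;
}

const nav = { navigationStart: 0, responseStart: 400, loadEventEnd: 2800 };
console.log(pageLoadTime(nav)); // 2800 ms from navigation to load event
```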

“It depends”

My favorite answer to questions is “it depends.” There is no silver bullet or one-size-fits-all answer. You need to understand the context of the information being presented to know where to look when resolving issues. Knowledge of the application and its users is critical to understanding what the performance metrics are telling you. Take a moment to think about what is being measured, how data is flowing, and where to look to resolve any issues.

Redirections may be a result of application logic, or they may be a result of network issues. When taking real user measurements, timing begins with the Navigation Start event, which includes redirects. If a user clicks on an ad and is redirected through multiple ad servers before reaching your website, all of that time is added to redirect time; in this case, redirection time would be attributed to the network. If a redirect occurs because a user on a mobile device is sent from www.example.com to m.example.com, the redirection time should be attributed to the back-end.
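In Navigation Timing terms, redirect time is simply the span between the `redirectStart` and `redirectEnd` marks; whether it belongs to the network or the back-end is the judgment call described above. A small sketch with made-up values for the ad-server scenario:

```javascript
// Redirect time per the Navigation Timing model: all time spent bouncing
// through redirects before the final request. Values are hypothetical.
function redirectTime(t) {
  return t.redirectEnd - t.redirectStart;
}

// A click that bounced through several ad servers before the page loaded.
const nav = { redirectStart: 0, redirectEnd: 850, responseStart: 1100 };
console.log(redirectTime(nav)); // 850 ms spent in redirects
```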

The words we use, and how we define them, are important. When describing where problems exist, be as specific as possible; otherwise time will be wasted and frustration will follow. Don’t assume you know what somebody means when they use a term – take a few seconds and ask for clarification.

The table below is a quick cheat sheet on how some common metrics used to measure digital experience should be categorized:

[Table: classification of common digital experience metrics]

