By James Broome, Director of Engineering
How to use Axios interceptors to poll for long running API calls

This post looks at how Axios interceptors can be used to centralise polling logic in your UI application for long-running async API calls. Many frameworks (including endjin's Marain.Operations framework and Azure Durable Functions) implement the async HTTP API pattern to address the problem of coordinating the state of long-running operations with external clients.

Typically, an HTTP endpoint is used to trigger the long-running action, returning the location of a status endpoint that the client application can poll to understand when the operation is finished.

When the calling client is a user-facing application - for example a web UI - it will typically want to wait to see what happens with the long-running operation, so that it can inform the user if something went wrong.

In a modern single-page web application, backed by an async RESTful API, this requirement is fairly common, and the rest of this post looks at how to simplify and centralise the UI logic required to work with this API pattern.

The background

This example specifically describes an approach based on a Nuxt.js application (a framework built on top of Vue.js), using the @nuxt/axios module as a JavaScript HTTP client. However, the principles and approach apply to any JavaScript client that can use the npm Axios module.

It's based around the built-in support for interceptors within Axios - centralised hooks that will run on any request, response or error. They can be set up easily when configuring your axios module.

The Nuxt-specific version of the module takes this a step further, exposing helpers that can be used inside a plugin to extend the behaviour of the axios module.
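A sketch of such a plugin, using the onRequest/onError helpers that @nuxt/axios exposes (the logging is illustrative):

```javascript
// plugins/axios.js - a sketch of a Nuxt plugin using the @nuxt/axios helpers
export default function axiosPlugin({ $axios }) {

    // Runs before every request is sent
    $axios.onRequest(config => {
        console.log("Making request to " + config.url);
    });

    // Runs on every error, wherever the request was issued from
    $axios.onError(error => {
        console.error("Request failed: " + error.message);
    });
}
```

Registering the file in the plugins section of nuxt.config.js wires these hooks into every request the application makes.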

These interceptors can be used to define our polling logic, meaning it can be centralised so that any API calls will automatically follow the same pattern.

The approach

The long-running async API pattern works as follows:

  1. The client application sends an HTTP request to trigger a long-running operation - this is typically going to be limited to actions that perform some kind of state update i.e. a POST or a PUT request
  2. The response will return an HTTP 202 Accepted status, saying the request has been queued for processing
  3. The response will also include a Location HTTP Header, specifying the URI of the status endpoint
  4. The client application will poll the status endpoint to retrieve the status of the operation (Waiting, Running, Succeeded, Failed etc).
  5. Once the operation has completed, the status endpoint will typically return another Location HTTP Header, specifying the URI of the resulting resource (i.e. the resource that has been created/updated)
  6. The client application issues a request to this URI to retrieve the resource

The following solution applies that pattern inside the Axios interceptor/helper function so that any code that issues HTTP requests will automatically follow the steps above.

The solution

export default function ({ $axios }) {

    // Axios interceptor to handle responses that need to be polled
    $axios.onResponse(async response => {

        // Use the 202 response code as an indicator that polling is needed
        if (response.status === 202) {

            console.log("HTTP 202 received, polling operation...");
            console.log("Operation running at " + response.headers.location);

            // Retrieve the initial operation status
            let pollingResponse = await $axios.get(response.headers.location);

            console.log("Operation status is " + pollingResponse.data.status);

            // Loop while the operation is still in progress...
            while (pollingResponse.data.status !== "Succeeded" && pollingResponse.data.status !== "Failed") {

                // Wait before polling again - an awaited Promise is needed
                // here, as setTimeout on its own wouldn't pause the loop
                await new Promise(resolve => setTimeout(resolve, 2000));

                pollingResponse = await $axios.get(response.headers.location);

                console.log("Operation status is " + pollingResponse.data.status);
            }

            if (pollingResponse.data.status === "Failed") {
                // Treat failures as exceptions, so they can be handled as such
                throw new Error("Operation failed!");
            }
            else {

                console.log("Operation succeeded!");
                console.log("Retrieving resource at " + pollingResponse.data.resourceLocation);

                // Once operation succeeded, return response from final resource location
                return await $axios.get(pollingResponse.data.resourceLocation);
            }
        }

        // If not a 202 response, then return as normal
        return response;
    })
}

Once the interceptor logic is in place, any requests that we need to make that return an HTTP 202 code will automatically be polled. The calling code will receive the response from the resulting resource location, without having to know or care about the long-running operation, as shown in the following example:

async testFunction(payload) {

    try {

        const response = await this.$axios.put("url/to/update/resource", payload);

        // At this point, the response is the updated resource
        // as all the polling has been taken care of

        alert(response);

    }
    catch (err) {

        // If the long-running operation failed, we can handle
        // that here...
    }
}

FAQs

What are Axios interceptors? Interceptors are a way to add hooks to every request, response or error, to extend the Axios module with custom logic.
How do you implement long-running asynchronous APIs? Typically, an HTTP endpoint is used to trigger a long-running action, which returns an HTTP 202 Accepted response and the location of a status endpoint that the client application can poll to understand when the operation is finished.

James Broome, Director of Engineering

James has spent nearly 20 years delivering high quality software solutions addressing global business problems, with teams and clients across 3 continents. As Director of Engineering at endjin, he leads the team in providing technology strategy, data insights and engineering support to organisations of all sizes - from disruptive B2C start-ups, to global financial institutions. He's responsible for the success of our customer-facing project delivery, as well as the capability and growth of our delivery team.