Exponential Backoff with Jitter for the Fetch API

Imagine you’re trying to get data from an API, but sometimes the request fails due to a temporary network hiccup or an overloaded server. Instead of giving up or immediately hammering the server again, a smarter approach is to wait a bit and retry. Exponential Backoff with Jitter is a resilient pattern for exactly this situation.

The Problem

A simple retry might look like this, but it’s not very smart:

// Simple, but not recommended - could overwhelm the server
async function simpleFetch(url, retries = 3) {
  for (let i = 0; i <= retries; i++) {
    try {
      const response = await fetch(url);
      if (response.ok) return response.json();
      // Treat an HTTP error status as a failure too
      throw new Error(`Request failed with status ${response.status}`);
    } catch (error) {
      if (i === retries) throw error; // Re-throw on the last attempt
      // Wait 1 second before retrying - same delay every time!
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
}

The Solution: Exponential Backoff + Jitter

  1. Exponential Backoff: Instead of waiting the same amount of time before each retry, we double (or exponentially increase) the wait time after each failure. This gives the server more time to recover. For example: wait 1s, then 2s, then 4s, then 8s.
  2. Jitter: If many clients retry at exactly the same moments (e.g., all after 1s, 2s, 4s), they can create a “thundering herd” problem, overwhelming the server again just as it is recovering. Jitter adds a random amount to each delay, spreading the retry attempts out.
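Since the delay calculation is pure arithmetic, it helps to see it in isolation before reading the full fetch loop. Here is a small sketch of that calculation (the name backoffDelay and the 10-second cap are illustrative choices, not a standard API):

```javascript
// Compute the wait time for a given retry attempt (0-based).
// Exponential part: baseDelay * 2^attempt.
// Jitter part: a random extra between 0 and the exponential part.
// The result is capped so late retries don't wait unreasonably long.
function backoffDelay(attempt, baseDelay = 1000, maxDelay = 10000) {
  const exponential = baseDelay * Math.pow(2, attempt);
  const jitter = Math.random() * exponential;
  return Math.min(exponential + jitter, maxDelay);
}
```

For attempt 0 this returns between 1000 and 2000 ms, for attempt 1 between 2000 and 4000 ms, and so on, never exceeding the cap.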

Here’s a simple example implementing both:

async function fetchWithBackoff(url, options = {}, maxRetries = 3, baseDelay = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      // If the response is successful (status 2xx), return the data
      if (response.ok) {
        return await response.json();
      }

      // Turn an HTTP error status into an exception, and record whether
      // it is worth retrying: 5xx means server trouble, likely temporary
      const error = new Error(`Request failed with status ${response.status}`);
      error.retryable = response.status >= 500;
      throw error;
    } catch (error) {
      // Network errors (fetch itself rejects) are retryable;
      // client errors (4xx) are not, so give up on those immediately
      if (error.retryable === false || attempt === maxRetries) {
        throw error;
      }
      console.log(`Attempt ${attempt + 1} failed: ${error.message}. Retrying...`);
    }

    // Calculate delay with exponential backoff and jitter
    // Base delay * 2^attempt: e.g., 1000, 2000, 4000...
    const exponentialDelay = baseDelay * Math.pow(2, attempt);
    // Add jitter: a random value between 0 and the exponential delay.
    // This prevents synchronized retries across many clients.
    const jitter = Math.random() * exponentialDelay;
    // Cap the maximum delay to avoid waiting too long
    const cappedDelay = Math.min(exponentialDelay + jitter, 10000); // Max 10 seconds

    console.log(`Waiting ${Math.round(cappedDelay)}ms before retry...`);
    await new Promise(resolve => setTimeout(resolve, cappedDelay));
  }
}

// Usage Example
fetchWithBackoff('https://httpstat.us/500') // This endpoint returns a 500 error
  .then(data => console.log('Success:', data))
  .catch(error => console.error('All retries failed:', error));

In this example:

  • We try to fetch a URL.
  • If it fails (network error or 5xx status), we calculate a delay.
  • The exponentialDelay doubles each time (1s, 2s, 4s…).
  • The jitter adds a random amount (up to the current exponential delay) to that time.
  • We wait for the jittered delay (capped at 10 seconds) before trying again.
  • This process repeats until we succeed or hit the maxRetries limit.

Retry Only on Safe or Idempotent Requests

While retrying failed requests makes your app more resilient, you must be very careful about which requests you retry.

Not all HTTP methods are safe to retry automatically:

  • Generally safe to retry (idempotent): GET, HEAD, OPTIONS, DELETE
    These can be repeated without changing the outcome beyond the first execution. Fetching data (GET) has no side effects at all, and deleting a resource (DELETE) twice has the same end result as deleting it once (DELETE does change state, so it is idempotent rather than “safe” in the strict HTTP sense).

  • ⚠️ Use caution with: PUT, PATCH
    These are usually idempotent (e.g., updating a user profile with the same data), but not always. Be sure the operation is safe to repeat.

  • Never blindly retry: POST
    POST typically creates new resources. Retrying a POST request could result in duplicate orders, payments, messages, or users — a serious problem!

When is it okay to retry a POST?

Only under specific conditions:

  • The server returns a 429 Too Many Requests or 503 Service Unavailable status and includes a Retry-After header.
  • You are certain the operation is idempotent (e.g., using an idempotency key in the request).
  • You have explicit confirmation from the API documentation that retries are safe.

✅ Smart Retry with Method & Status Checks

Let’s improve our fetchWithBackoff function to avoid retrying unsafe methods:

async function fetchWithBackoff(url, options = {}, maxRetries = 3, baseDelay = 1000) {
  const { method = 'GET' } = options;

  // Never retry POST requests unless explicitly allowed
  if (method.toUpperCase() === 'POST') {
    console.warn('POST requests are not retried by default for safety.');
    // Make a single attempt with no retry logic
    const response = await fetch(url, options);
    if (!response.ok) throw new Error(`POST failed with status ${response.status}`);
    return response.json();
  }

  // Only retry on server errors (5xx) or specific retryable statuses
  const retryableStatuses = [500, 502, 503, 504, 429]; // 429 = Too Many Requests

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.ok) {
        return await response.json();
      }

      const error = new Error(`Request failed with status ${response.status}`);
      // Don't retry on client errors like 400, 404, 401, etc.
      error.retryable = retryableStatuses.includes(response.status);
      throw error;
    } catch (error) {
      // Network errors (fetch itself rejects) are retryable;
      // non-retryable statuses and the final attempt give up immediately
      if (error.retryable === false || attempt === maxRetries) {
        throw error;
      }
      console.log(`Attempt ${attempt + 1} failed: ${error.message}. Retrying...`);
    }

    // Exponential backoff + jitter
    const exponentialDelay = baseDelay * Math.pow(2, attempt);
    const jitter = Math.random() * exponentialDelay;
    const cappedDelay = Math.min(exponentialDelay + jitter, 10000); // Max 10 seconds

    console.log(`Waiting ${Math.round(cappedDelay)}ms before retry...`);
    await new Promise(resolve => setTimeout(resolve, cappedDelay));
  }
}
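Because the retryable statuses include 429 and 503, a useful refinement is to honor the Retry-After header when the server sends one. Per the HTTP spec its value is either a number of seconds or an HTTP date; this small parser is a sketch, with parseRetryAfter being an illustrative name:

```javascript
// Parse a Retry-After header value into a delay in milliseconds.
// The header is either delay-seconds ("120") or an HTTP-date
// ("Wed, 21 Oct 2015 07:28:00 GMT"). Returns null if absent/unparseable.
function parseRetryAfter(headerValue, now = Date.now()) {
  if (!headerValue) return null;

  // Case 1: a plain number of seconds
  const seconds = Number(headerValue);
  if (Number.isFinite(seconds)) {
    return Math.max(0, seconds * 1000);
  }

  // Case 2: an HTTP date - wait until that moment
  const date = Date.parse(headerValue);
  if (!Number.isNaN(date)) {
    return Math.max(0, date - now);
  }

  return null; // Unparseable: fall back to normal backoff
}
```

Inside the retry loop you could then prefer the server’s hint over the computed backoff, e.g. `const delay = parseRetryAfter(response.headers.get('Retry-After')) ?? cappedDelay;`.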

✅ Best Practices Summary

| Scenario                     | Should You Retry?  | Notes                              |
|------------------------------|--------------------|------------------------------------|
| GET fails (5xx, network)     | ✅ Yes             | Safe and idempotent                |
| DELETE fails                 | ✅ Yes             | Usually idempotent                 |
| PUT / PATCH fails            | ⚠️ Carefully       | Only if idempotent                 |
| POST fails (5xx)             | ❌ No (by default) | Risk of duplicates                 |
| POST with 429 + Retry-After  | ✅ Only if safe    | Respect the header and idempotency |
| 404, 400, 401 errors         | ❌ No              | Client-side issues, not temporary  |

Final Thoughts

Adding exponential backoff with jitter makes your app resilient. But combining it with smart retry logic based on HTTP method and status code makes it safe and production-ready.

Always ask:

“If this request runs twice, will it break something?”

When in doubt — don’t retry.


