Error Medic

Notion API Rate Limit & 502 Errors: Complete Troubleshooting Guide (HTTP 429 & Bad Gateway)

Fix Notion API rate limit (HTTP 429) and 502 Bad Gateway errors with exponential backoff, request queuing, and retry logic. Step-by-step fixes with code example

Last updated:
Last verified:
1,910 words
Key Takeaways
  • Notion enforces a hard rate limit of 3 requests per second per integration token; exceeding it returns HTTP 429 with a Retry-After header you must honour
  • 502 Bad Gateway responses from the Notion API are transient server-side errors unrelated to your quota — they require idempotent retry with backoff, not credential changes
  • The fastest fix for both errors is a single wrapper function that reads Retry-After on 429 and applies exponential backoff with jitter on 429/502/503, capping at ~32 seconds between attempts
  • Bulk operations (database queries, page property updates) must be serialised or queued client-side; Notion has no batch-write endpoint, so parallelising N updates fires N individual requests against the same quota
  • Persistent 502s lasting more than 5 minutes almost always indicate a Notion platform incident — check status.notion.so before spending time debugging your code
Fix Approaches Compared
MethodWhen to UseEstimated Time to ImplementRisk / Trade-off
Exponential backoff + jitter on 429Any integration that makes more than ~10 requests/min30 minLow — standard pattern; may increase total wall-clock time for bulk jobs
Request queue with token-bucket throttleBulk ETL, sync jobs, or integrations that burst above 3 req/s2–4 hoursLow — adds complexity; rate is controlled before hitting the API
Notion SDK built-in retry (v2+)Greenfield projects using @notionhq/client ≥ 2.010 minLow — opinionated defaults; less tunable than custom logic
Caching API responses (read-only calls)Dashboards or read-heavy apps re-fetching the same pages1–2 hoursMedium — stale data risk; requires cache invalidation strategy
Splitting into multiple integration tokensVery high-volume pipelines exceeding one token's budget1 dayMedium-High — each token must have its own Notion integration with correct permissions
Idempotent retry on 502/503Any production integration30 minLow — essential hygiene; no downside if implemented correctly

Understanding Notion API Rate Limits and 502 Errors

The Notion API enforces two distinct failure modes that developers frequently conflate. Understanding which one you are hitting is the first diagnostic step.

HTTP 429 Too Many Requests means your integration has exceeded Notion's rate limit. As of 2024, Notion enforces approximately 3 requests per second per integration token. The response body looks like this:

{
  "object": "error",
  "status": 429,
  "code": "rate_limited",
  "message": "You have been rate limited. Please try again later."
}

The response also includes a Retry-After header (value in seconds) that you must read and honour. Hammering the API after a 429 without waiting will not help — Notion's infrastructure will continue rejecting requests and may extend the back-off window.

HTTP 502 Bad Gateway means a proxy or load balancer in front of Notion's servers received an invalid response from an upstream service. The body is usually minimal HTML or a JSON object with "status": 502. This is a transient server-side error. It is not caused by your request payload, credentials, or rate behaviour. It requires a retry, not a code change.


Step 1: Identify Which Error You Are Hitting

Run a minimal probe request and capture the full HTTP response:

curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $NOTION_TOKEN" \
  -H "Notion-Version: 2022-06-28" \
  "https://api.notion.com/v1/users/me"
  • Returns 200 → credentials are fine; your burst logic is the problem.
  • Returns 429 → you are actively rate-limited. Wait the Retry-After seconds before continuing.
  • Returns 502 → Notion is experiencing a transient fault. Check https://status.notion.so.
  • Returns 401 → wrong or expired token; unrelated to rate limits.

Step 2: Implement Exponential Backoff with Jitter

This single change resolves the majority of 429 and 502 incidents in production. The pattern is:

  1. Attempt the request.
  2. On 429, read the Retry-After header. Sleep for max(Retry-After, calculated_backoff) seconds.
  3. On 502 or 503, calculate backoff as min(base * 2^attempt, max_backoff) + random_jitter.
  4. Retry up to a configurable maximum (typically 5–7 attempts).
  5. After exhausting retries, raise the error to your caller with full context.

The JavaScript implementation using the official SDK is shown in the code_block section. In Python the pattern is identical — see the tenacity library for a decorator-based approach.


Step 3: Throttle Your Request Rate Client-Side

Backoff alone is reactive — it waits until you have already been rate-limited. A token-bucket or leaky-bucket throttle is proactive and eliminates 429s for predictable workloads.

For a simple Node.js ETL script processing thousands of Notion pages:

// Limit to 2.5 req/s to stay safely under the 3 req/s cap
const pLimit = require('p-limit');
const limit = pLimit(1); // 1 concurrent request at a time

async function sleep(ms) { return new Promise(r => setTimeout(r, ms)); }

const results = await Promise.all(
  pageIds.map(id =>
    limit(async () => {
      await sleep(400); // ~2.5 req/s
      return notion.pages.retrieve({ page_id: id });
    })
  )
);

For Python, use asyncio.Semaphore or the ratelimit package to achieve the same effect.


Step 4: Use the Official SDK's Built-In Retry (Node.js)

If you are using @notionhq/client version 2.0 or later, retry behaviour is available as a constructor option:

const { Client } = require('@notionhq/client');

const notion = new Client({
  auth: process.env.NOTION_TOKEN,
  timeoutMs: 30_000,
  // Built-in retry on 429 and network errors
  fetch: undefined, // uses default fetch
});

Note: as of SDK v2.2, the client does not automatically retry 429s by default — you must implement the retry wrapper yourself or use the notionhq/client APIResponseError class to detect and handle them in your own loop.


Step 5: Diagnose Persistent 502s

If you are receiving 502s consistently (more than 1 in 10 requests over 5+ minutes), do the following in order:

  1. Check Notion's status page: https://status.notion.so — ongoing incidents are listed here. If there is an active incident, no code change will help.
  2. Check your network path: If you are running behind a corporate proxy or a WAF (Cloudflare, AWS API Gateway), the 502 may originate from your own infrastructure, not Notion's.
  3. Inspect the response body: A Notion-origin 502 returns a JSON error object. A proxy-origin 502 returns HTML. Distinguish them:
curl -i -H "Authorization: Bearer $NOTION_TOKEN" \
     -H "Notion-Version: 2022-06-28" \
     "https://api.notion.com/v1/databases/YOUR_DATABASE_ID/query" \
     -d '{"page_size": 1}'

If the body starts with <!DOCTYPE html>, the 502 is coming from a proxy or CDN in front of Notion or in your egress path.

  1. Add request IDs to your logs: Include the x-notion-request-id response header in your error logs. If you need to escalate to Notion support, this ID lets them trace the specific request.

Step 6: Long-Term Architecture Improvements

Queue all write operations. Use a job queue (BullMQ, Celery, SQS) to process Notion writes asynchronously. This decouples your application's response time from Notion API availability and gives you natural rate control.

Cache read-heavy workloads. If your app reads the same Notion pages repeatedly (e.g., a published knowledge base), cache responses in Redis or an in-memory store with a TTL of 60–300 seconds. This can reduce your API call volume by 90%+ for read workloads.

Monitor your rate limit consumption. Log the x-notion-request-id and response status for every API call. Alert when your 429 rate exceeds 1% over a 5-minute window. This gives you early warning before user-facing failures.

Paginate correctly. Notion's list endpoints return has_more: true and a next_cursor when results are paginated. Fetching all pages in a tight loop is a common source of unexpected 429s. Always check has_more and add a small delay between pagination requests.

Frequently Asked Questions

javascript
// notion-safe-client.js
// Drop-in wrapper for @notionhq/client that handles 429 and 502 automatically

const { Client, APIResponseError } = require('@notionhq/client');

const RETRYABLE_STATUS_CODES = new Set([429, 502, 503, 504]);
const MAX_RETRIES = 6;
const BASE_DELAY_MS = 1000;
const MAX_DELAY_MS = 32000;

function jitter(ms) {
  return ms + Math.random() * ms * 0.2; // ±20% jitter
}

async function withRetry(fn, attempt = 0) {
  try {
    return await fn();
  } catch (err) {
    if (!(err instanceof APIResponseError)) throw err;

    const status = err.status;
    if (!RETRYABLE_STATUS_CODES.has(status) || attempt >= MAX_RETRIES) {
      console.error(`[notion] Non-retryable error ${status} after ${attempt} attempts:`, err.message);
      throw err;
    }

    let delayMs;
    if (status === 429) {
      // Honour Retry-After header if present
      const retryAfterSec = parseInt(err.headers?.['retry-after'] ?? '0', 10);
      delayMs = retryAfterSec > 0
        ? retryAfterSec * 1000
        : Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
    } else {
      // Exponential backoff for 502/503/504
      delayMs = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
    }

    delayMs = jitter(delayMs);
    console.warn(`[notion] HTTP ${status} on attempt ${attempt + 1}. Retrying in ${Math.round(delayMs)}ms...`);
    await new Promise(r => setTimeout(r, delayMs));
    return withRetry(fn, attempt + 1);
  }
}

// Usage example:
const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function safeDatabaseQuery(databaseId, filter) {
  return withRetry(() =>
    notion.databases.query({
      database_id: databaseId,
      filter,
      page_size: 100,
    })
  );
}

// Diagnostic: test your token and print rate-limit-relevant headers
async function diagnose() {
  const start = Date.now();
  try {
    const res = await notion.users.me();
    console.log('Token valid. Bot user:', res.id);
    console.log('Elapsed:', Date.now() - start, 'ms');
  } catch (err) {
    if (err instanceof APIResponseError) {
      console.error('Status:', err.status);
      console.error('Code:', err.code);
      console.error('Headers:', JSON.stringify(err.headers, null, 2));
      console.error('Request ID:', err.headers?.['x-notion-request-id']);
    } else {
      throw err;
    }
  }
}

module.exports = { withRetry, safeDatabaseQuery, diagnose };
E

Error Medic Editorial

The Error Medic Editorial team consists of senior DevOps and SRE engineers with experience operating high-volume API integrations across SaaS platforms. We write practical, command-driven troubleshooting guides grounded in production incident post-mortems.

Sources

Related Articles in Notion Api

Explore More API Errors Guides