# How to Fix Notion API Rate Limit Exceeded (HTTP 429) and 502 Bad Gateway Errors
Resolve Notion API HTTP 429 Too Many Requests and 502 Bad Gateway errors by implementing exponential backoff, retry logic, and optimizing database queries.
- HTTP 429 Too Many Requests means you exceeded Notion's rate limit of 3 requests per second per integration.
- HTTP 502 Bad Gateway often indicates server-side timeouts caused by complex queries or fetching excessively large databases.
- Always respect the Retry-After header for 429s, and implement exponential backoff with jitter to handle 502 server errors reliably.
- Proactively prevent errors by using concurrency limiters in your application and paginating API requests appropriately.
| Method | When to Use | Implementation Time | Risk |
|---|---|---|---|
| Exponential Backoff & Retries | Handling 429 and 5xx API errors gracefully | Medium | Low |
| Concurrency Limiting (Throttling) | Preventing 429s during bulk inserts or updates | Low | Low |
| Query Optimization & Pagination | Reducing 502s when querying large databases | Medium | Medium |
## Understanding the Error
When integrating with the Notion API, developers frequently encounter two disruptive HTTP status codes: 429 Too Many Requests and 502 Bad Gateway. These errors can halt data synchronization pipelines, break automated workflows, and degrade the user experience of your internal tools.
### What is a 429 Rate Limit Exceeded?
A 429 Too Many Requests response indicates that your application has exceeded Notion's predefined rate limits. Notion strictly enforces an average limit of 3 requests per second per integration. While small bursts are sometimes tolerated, sustained traffic above this threshold will inevitably trigger a 429 error. The API response typically includes a Retry-After header, which specifies the number of seconds your application must wait before making another request.
### What causes a 502 Bad Gateway?
A 502 Bad Gateway error means that the server acting as a gateway or proxy received an invalid response from the upstream server. In the context of the Notion API, this usually occurs when a request takes too long to process. This is common when executing complex filters on large databases, fetching pages with deep block hierarchies, or during periods of general platform degradation. Unlike 429s, 502s do not include a Retry-After header and indicate server-side strain rather than a strict client-side limit violation.
## Step 1: Diagnose
Before applying a fix, you must determine which error is causing your pipeline to fail and under what conditions.
- Inspect Response Headers: For 429 errors, always check the response headers for `Retry-After`. This value is critical for implementing an efficient delay mechanism.
- Analyze Request Patterns: Are you firing requests concurrently? If you are using `Promise.all()` in Node.js or `asyncio.gather()` in Python to update multiple Notion pages simultaneously, you are likely hitting the 3 requests/second limit instantly.
- Evaluate Query Complexity: For 502 errors, review the specific endpoint that is failing. Is it a database query with multiple nested `AND`/`OR` filters? Are you trying to retrieve a database with tens of thousands of rows without appropriate pagination?
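A small triage helper makes the distinction concrete. This is an illustrative sketch, not part of any Notion SDK; `status_code` and `headers` mirror the fields on a typical HTTP response object such as `requests.Response`:

```python
# Hypothetical helper: classify a Notion API response so you know
# which fix applies. Names here are illustrative, not from Notion's SDK.
def diagnose_notion_error(status_code: int, headers: dict) -> str:
    if status_code == 429:
        # Notion includes Retry-After (in seconds) on rate-limit responses
        wait = headers.get("Retry-After", "unknown")
        return f"rate_limited: wait {wait}s before retrying"
    if status_code in (500, 502, 503, 504):
        # Server-side strain: no Retry-After, so back off exponentially
        return "server_error: retry with exponential backoff"
    if 200 <= status_code < 300:
        return "ok"
    # 4xx errors other than 429 indicate a malformed request, not load
    return f"client_error: {status_code} requires a request fix, not a retry"
```

Logging this classification alongside each failure quickly reveals whether you are fighting rate limits, server strain, or a bad request body.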
## Step 2: Fix
Addressing these errors requires a combination of architectural adjustments and defensive programming techniques.
### Implement Exponential Backoff with Jitter
The most robust solution for handling both 429 and 502 errors is implementing an exponential backoff strategy with jitter. When a request fails, your application should wait for a short period before retrying. If the retry fails, the wait time increases exponentially (e.g., 2s, 4s, 8s). Adding "jitter" (a random variation to the delay) prevents the "thundering herd" problem, where multiple failing concurrent requests retry at the exact same millisecond.
If you receive a 429 error and the Retry-After header is present, you should prioritize that specific delay over your calculated exponential backoff.
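As a minimal sketch of that policy (standard library only; `send_request` is a placeholder for your actual Notion API call and should return an object with `status_code` and `headers`, like a `requests.Response`):

```python
import random
import time

def request_with_backoff(send_request, max_attempts=5):
    for attempt in range(max_attempts):
        response = send_request()
        if response.status_code == 429:
            # Prioritize the server's Retry-After over our own schedule
            delay = int(response.headers.get("Retry-After", 2 ** attempt))
        elif response.status_code in (500, 502, 503, 504):
            # Exponential backoff (1s, 2s, 4s, ...) plus random jitter
            # to avoid synchronized retries across workers
            delay = (2 ** attempt) + random.uniform(0, 1)
        else:
            return response
        time.sleep(delay)
    raise RuntimeError("Notion API request failed after all retries")
```

In production you would typically reach for a battle-tested library (such as tenacity in Python) rather than hand-rolling this loop, but the control flow is the same.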
### Control Concurrency Levels
Do not rely solely on retries to handle rate limits; proactively manage your request velocity.
- Node.js: Use a concurrency-limiting library like `p-limit` or `bottleneck` to restrict the number of active API calls to 2 or 3 at any given time.
- Python: Use an `asyncio.Semaphore` or a worker queue to throttle requests.
### Optimize and Paginate Queries
To mitigate 502 Bad Gateway errors:
- Always use pagination when querying databases or fetching block children. Process data in chunks (Notion's default is 100 items per page).
- Simplify your database filters. Move complex filtering logic to your application layer if the API struggles to evaluate it efficiently.
- Avoid requesting full page content if you only need properties.
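The pagination loop can be sketched as follows; `query_fn` is a hypothetical wrapper around one POST to the database query endpoint that returns its parsed JSON body (the `has_more`, `next_cursor`, and `start_cursor` fields are Notion's standard cursor contract):

```python
def query_all_pages(query_fn, page_size=100):
    results = []
    payload = {"page_size": page_size}
    while True:
        data = query_fn(payload)
        results.extend(data.get("results", []))
        if not data.get("has_more"):
            break
        # Notion returns next_cursor whenever has_more is true;
        # feed it back as start_cursor to fetch the next chunk
        payload["start_cursor"] = data["next_cursor"]
    return results
```

Processing each 100-item chunk as it arrives, rather than accumulating everything in memory, keeps individual requests fast and well clear of the gateway timeout.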
By combining strict concurrency control with intelligent retry logic, you can build a resilient Notion integration that seamlessly handles rate limits and transient server errors.
## Complete Example: Retry Logic in Python

The script below combines the techniques above: it honors the `Retry-After` header on 429s and applies exponential backoff with jitter to 5xx responses using the tenacity library.
```python
import requests
import time
import logging
from tenacity import (
    retry,
    wait_random_exponential,
    stop_after_attempt,
    retry_if_exception_type,
)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class NotionRateLimitException(Exception):
    pass


class NotionServerErrorException(Exception):
    pass


def handle_notion_response(response):
    # Handle 429 Too Many Requests
    if response.status_code == 429:
        retry_after = int(response.headers.get("Retry-After", 1))
        logger.warning(f"Rate limited (429). Retrying after {retry_after} seconds.")
        time.sleep(retry_after)  # Prioritize the API's requested delay
        raise NotionRateLimitException("Rate limit exceeded")
    # Handle 5xx Server Errors
    elif response.status_code in (500, 502, 503, 504):
        logger.warning(f"Server error ({response.status_code}). Retrying with backoff.")
        raise NotionServerErrorException(f"Server error: {response.status_code}")
    response.raise_for_status()
    return response.json()


# Exponential backoff with jitter for 5xx errors; 429s have already
# slept for Retry-After before the retry fires
@retry(
    wait=wait_random_exponential(multiplier=1, max=10),
    stop=stop_after_attempt(5),
    retry=(
        retry_if_exception_type(NotionRateLimitException)
        | retry_if_exception_type(NotionServerErrorException)
    ),
)
def safe_notion_query(database_id, headers, payload=None):
    url = f"https://api.notion.com/v1/databases/{database_id}/query"
    response = requests.post(url, headers=headers, json=payload or {})
    return handle_notion_response(response)


# Example Usage
if __name__ == "__main__":
    NOTION_TOKEN = "your_integration_token_here"
    DB_ID = "your_database_id_here"

    headers = {
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    }

    try:
        # Ensure you implement pagination in your payload for large databases
        payload = {"page_size": 100}
        data = safe_notion_query(DB_ID, headers, payload)
        print(f"Successfully retrieved {len(data.get('results', []))} records.")
    except Exception as e:
        logger.error(f"Failed to query Notion after retries: {e}")
```

## Error Medic Editorial
Error Medic Editorial is a specialized team of senior DevOps, SRE professionals, and software engineers dedicated to providing actionable, in-depth troubleshooting guides for modern APIs and cloud infrastructure.