Troubleshooting GitHub API Rate Limits: Fixing 401, 403, 429, and Timeout Errors
Comprehensive guide to fixing GitHub API rate limits, 401 Unauthorized, 403 Forbidden, 429 Too Many Requests, and 502 timeouts with backoff strategies.
- Unauthenticated requests are strictly limited to 60 requests per hour, quickly leading to 403 Forbidden errors.
- Secondary rate limits (429 Too Many Requests) are triggered by high concurrency, rapid mutations, or expensive queries even if your primary limit is not exhausted.
- Quick Fix: Authenticate using a Personal Access Token (PAT) or GitHub App to increase limits to 5,000 or 15,000 requests per hour, and parse the `x-ratelimit-reset` header to implement intelligent retry backoff.
| Method | When to Use | Time | Risk |
|---|---|---|---|
| Authenticate with PAT | When currently unauthenticated (increases limit from 60 to 5,000 req/hr) | 5 mins | Low |
| Use GitHub App Token | For CI/CD and enterprise production tools (15,000+ req/hr) | 30 mins | Low |
| Implement Exponential Backoff | When hitting 403/429 secondary limits due to high concurrency | 1-2 hours | Medium |
| GraphQL API Migration | When making excessive REST calls to fetch nested relational data | 1-2 weeks | High |
Understanding GitHub API Errors
When building integrations, CI/CD pipelines, or automation scripts that interact with GitHub, encountering API errors is a rite of passage. GitHub strictly protects its infrastructure by enforcing rigorous rate limits and authentication requirements. If your application starts failing with 401 Unauthorized, 403 Forbidden, 429 Too Many Requests, 502 Bad Gateway, or timeout errors, you are likely running afoul of these protections.
Understanding the nuanced differences between these HTTP status codes is critical for implementing the correct fix. A 403 error might mean you have hit a hard rate limit, but it could also mean your token lacks the correct scopes. A 429 error specifically indicates you are triggering secondary abuse-prevention limits, often due to high concurrency. Timeouts and 502s usually point to fetching payloads that are simply too large.
Let us break down each error, how to diagnose it, and the precise steps required to restore your integration to a healthy state.
Diagnosing the Exact Error
The first step in troubleshooting is to inspect the HTTP response headers returned by the GitHub API. GitHub provides crucial diagnostic information in these headers, specifically the x-ratelimit-* suite of headers.
When you receive an error, log or curl the endpoint with the -i flag to inspect the headers:
curl -i -H "Authorization: token YOUR_TOKEN" https://api.github.com/users/octocat
Pay close attention to:
- `x-ratelimit-limit`: The maximum number of requests you are permitted to make per hour.
- `x-ratelimit-remaining`: The number of requests remaining in the current rate limit window.
- `x-ratelimit-reset`: The time at which the current rate limit window resets, in UTC epoch seconds.
- `retry-after`: If present, the number of seconds you must wait before making another request (typically seen with 429 or secondary 403 limits).
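As a minimal sketch, the headers above can be pulled out of a raw response with a small helper (the `get_header` function and the captured response fragment are illustrative, not part of any GitHub tooling):

```shell
#!/bin/sh
# Extract a named header value from a raw HTTP response on stdin.
# GitHub sends header names in lowercase; tr strips the CR from CRLF endings.
get_header() {
  tr -d '\r' | awk -F': ' -v h="$1" 'tolower($1) == h {print $2}'
}

# Usage against the live API (requires network):
#   curl -sI https://api.github.com/users/octocat | get_header "x-ratelimit-remaining"

# Demonstration with a captured response fragment:
RESPONSE="HTTP/2 200
x-ratelimit-limit: 60
x-ratelimit-remaining: 42
x-ratelimit-reset: 1700000000"

echo "$RESPONSE" | get_header "x-ratelimit-remaining"   # prints 42
```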
Error 1: 401 Unauthorized (Bad Credentials)
The Symptom
You receive a response like this:
{
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
}
The Root Cause
A 401 Unauthorized error means your request failed authentication. This happens when:
- You are not sending an `Authorization` header at all, and the endpoint requires it.
- The token you provided is invalid, expired, or has been revoked.
- The token is incorrectly formatted in the header (e.g., missing the `Bearer` or `token` keyword).
The Fix
- Verify Token Validity: Ensure your Personal Access Token (PAT) or GitHub App token is active. If using fine-grained PATs, verify they haven't expired.
- Check Header Formatting: Ensure your request header looks exactly like `Authorization: Bearer YOUR_TOKEN` or `Authorization: token YOUR_TOKEN`.
- SSO Authorization: If the repository belongs to an organization enforcing SAML SSO, you must explicitly authorize your PAT for use with that specific organization in your GitHub settings.
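A quick triage of these three causes can be scripted; this is a sketch, and the `check_token`/`explain_status` helpers and the choice of the `/user` endpoint are assumptions for illustration:

```shell
#!/bin/sh
# 401 triage sketch: probe an authenticated endpoint, then interpret the status.

# Return only the HTTP status code for an authenticated call to /user.
check_token() {
  curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $1" \
    https://api.github.com/user
}

# Map a status code to the likely cause of an authentication failure.
explain_status() {
  case "$1" in
    200) echo "Token is valid" ;;
    401) echo "Bad credentials: token is missing, malformed, expired, or revoked" ;;
    403) echo "Authenticated, but blocked: check scopes, SSO authorization, or rate limits" ;;
    *)   echo "Unexpected status: $1" ;;
  esac
}

# Usage (requires network):
#   explain_status "$(check_token "YOUR_TOKEN")"
```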
Error 2: 403 Forbidden (Primary Rate Limit Exceeded)
The Symptom
You receive a response like this:
{
"message": "API rate limit exceeded for 192.0.2.1. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
"documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"
}
The Root Cause
GitHub imposes a strict limit of 60 requests per hour for unauthenticated requests, associated with the originating IP address. If you are running a script from a CI runner (like GitHub Actions or AWS CodeBuild) without authentication, you share this limit with all other tenants on that IP, meaning you will exhaust it almost instantly.
For authenticated requests using a PAT or OAuth token, the primary limit is 5,000 requests per hour. For GitHub App installations, it scales up to 15,000 requests per hour.
The Fix
- Authenticate: If you are unauthenticated, generate a read-only PAT and include it in your requests. This instantly bumps your limit from 60 to 5,000.
- Use GitHub Apps: If you are building a production enterprise tool, migrate from a PAT to a GitHub App. GitHub Apps benefit from higher rate limits that scale with the number of repositories and users.
- Respect the Reset Time: If your authenticated limit is exhausted, you must wait. Read the `x-ratelimit-reset` header, convert the epoch timestamp to local time, and pause your scripts until that exact second passes.
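"Wait until the reset second passes" is a one-liner of epoch arithmetic. A minimal sketch (the `wait_for_reset` helper name and the one second of slack are assumptions):

```shell
#!/bin/sh
# Sleep until the x-ratelimit-reset timestamp (UTC epoch seconds) has passed.
# Arg 1: the reset timestamp taken from the response headers.
wait_for_reset() {
  now=$(date +%s)
  wait=$(( $1 - now + 1 ))   # one extra second of slack past the reset second
  if [ "$wait" -gt 0 ]; then
    echo "Rate limit exhausted; sleeping ${wait}s until the window resets"
    sleep "$wait"
  fi
}

# Usage: after a 403 with x-ratelimit-remaining: 0
#   wait_for_reset "$RESET_EPOCH"
```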
Error 3: 429 Too Many Requests (Secondary Rate Limits)
The Symptom
You receive a 429 Too Many Requests (or sometimes a 403 Forbidden with a specific abuse message) and a retry-after header.
{
"message": "You have exceeded a secondary rate limit. Please wait a few minutes before you try again.",
"documentation_url": "https://docs.github.com/rest/guides/best-practices-for-integrators#dealing-with-secondary-rate-limits"
}
The Root Cause
Secondary rate limits are abuse-prevention mechanisms. Even if you have 4,000 requests remaining in your primary bucket, GitHub will block you if you:
- Make too many concurrent requests (e.g., firing off 100 parallel API calls).
- Make requests that require heavy CPU on GitHub's side (e.g., searching code, fetching large dependency graphs, creating multiple issues rapidly).
- Exceed the mutation limit (creating/updating resources too quickly).
The Fix
Secondary limits require architectural changes to your integration:
- Throttle Concurrency: Never make concurrent requests to the GitHub API. Run your API calls sequentially.
- Implement Delays: Add a standard
sleep(1)between mutation requests (POST/PATCH/PUT/DELETE). - Listen to
retry-after: This is the most critical step. If you receive a 429 or 403 with aretry-afterheader, you MUST suspend all requests for the number of seconds specified in that header. Ignoring this and continuing to hammer the API can result in a temporary shadowban. - Implement Exponential Backoff: Wrap your API calls in a retry logic that uses exponential backoff with jitter. If a request fails, wait 1s, then 2s, then 4s, etc., up to a maximum threshold.
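The backoff pattern above can be sketched in shell. The 60-second cap, the five-attempt limit, and the crude date-based jitter are assumptions for illustration; a production client would use a proper random source:

```shell
#!/bin/sh
# backoff_delay N prints the delay (seconds) before retry attempt N (1-based):
# 2^(N-1) seconds, capped at 60, plus 0-2s of jitter so clients don't sync up.
backoff_delay() {
  base=$(( 1 << ($1 - 1) ))
  [ "$base" -gt 60 ] && base=60
  jitter=$(( $(date +%s) % 3 ))   # crude 0-2s jitter; $RANDOM is not POSIX
  echo $(( base + jitter ))
}

# Retry wrapper: run the given command, backing off exponentially on failure.
with_retries() {
  attempt=1
  while [ "$attempt" -le 5 ]; do
    "$@" && return 0
    delay=$(backoff_delay "$attempt")
    echo "Attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    attempt=$(( attempt + 1 ))
  done
  return 1
}

# Usage:
#   with_retries curl -sf -H "Authorization: Bearer $TOKEN" https://api.github.com/user
```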
Error 4: 502 Bad Gateway and Timeouts
The Symptom
Your client throws an ECONNRESET, a read timeout exception, or you receive a 502 Bad Gateway HTML response.
The Root Cause
Timeouts typically occur when requesting enormous datasets. Common culprits include:
- Fetching a repository with hundreds of thousands of commits using the `/commits` endpoint without pagination.
- Requesting a massive release asset.
- Complex search queries on large codebases.
The Fix
- Aggressive Pagination: Always use pagination (`?per_page=100&page=1`). Do not attempt to fetch everything at once.
- Use GraphQL for Targeted Data: If you are making multiple REST calls to gather nested data (e.g., getting a PR, then getting its commits, then getting the authors), switch to the GitHub GraphQL API. GraphQL allows you to fetch exactly the data you need in a single, efficient query, which prevents timeouts and saves your rate limit.
- Increase Client Timeout: If running within a CI/CD pipeline, ensure your HTTP client's timeout setting is generous enough (e.g., 30-60 seconds) to account for intermittent network latency to GitHub's servers.
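The pagination fix above can be sketched as a page-walking loop. `OWNER`, `REPO`, and `TOKEN` are placeholders, and `fetch_page` is split out from the loop purely for illustration:

```shell
#!/bin/sh
# Walk the /commits endpoint 100 items at a time instead of one enormous request.
fetch_page() {
  curl -s -H "Authorization: Bearer $TOKEN" \
    "https://api.github.com/repos/$OWNER/$REPO/commits?per_page=100&page=$1"
}

# Emit every commit SHA, page by page, stopping on a short or empty page.
list_all_commits() {
  page=1
  while :; do
    body=$(fetch_page "$page") || break
    count=$(echo "$body" | jq 'length')
    [ "$count" -eq 0 ] && break          # empty page: nothing left
    echo "$body" | jq -r '.[].sha'
    [ "$count" -lt 100 ] && break        # short page: this was the last one
    page=$(( page + 1 ))
  done
}

# Usage (requires network and jq):
#   OWNER=octocat REPO=Hello-World TOKEN=... list_all_commits
```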
Best Practices for Long-Term Resilience
To prevent these issues from recurring, implement the following architectural patterns:
- Conditional Requests: Always use `ETag` and `If-None-Match` or `Last-Modified` and `If-Modified-Since` headers. If the resource hasn't changed, GitHub returns a `304 Not Modified`. 304 responses do not count against your rate limit.
- Webhooks over Polling: Never poll the REST API to check for updates. Configure GitHub Webhooks to push events to your servers immediately when changes occur.
- Caching Layer: Implement a local caching layer (like Redis or Memcached) to store API responses for non-volatile data (like repository metadata or user profiles).
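The conditional-request pattern can be sketched as follows: store the `ETag` from each response and replay it via `If-None-Match`. The `conditional_get` helper, the etag file path, and `TOKEN` are assumptions for illustration:

```shell
#!/bin/sh
# Pull the etag value out of raw response headers (strips the CR from CRLF).
extract_etag() {
  tr -d '\r' | awk -F': ' 'tolower($1) == "etag" {print $2}'
}

# Fetch a URL conditionally; a 304 reply means the cached copy is still fresh
# and the request did not count against the rate limit.
conditional_get() {
  url=$1
  etag_file=$2
  etag=$(cat "$etag_file" 2>/dev/null)
  set -- -s -o body.json -D headers.txt \
    -H "Authorization: Bearer $TOKEN" -w "%{http_code}"
  [ -n "$etag" ] && set -- "$@" -H "If-None-Match: $etag"
  status=$(curl "$@" "$url")
  if [ "$status" = "304" ]; then
    echo "Not modified: serve the cached copy (no rate limit cost)"
  else
    extract_etag < headers.txt > "$etag_file"   # remember the ETag for next time
  fi
}

# Usage (requires network):
#   TOKEN=... conditional_get https://api.github.com/users/octocat /tmp/etag_octocat
```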
By systematically inspecting headers, strictly adhering to authentication best practices, and implementing robust backoff mechanisms, you can eliminate nearly all GitHub API rate limit disruptions from your infrastructure.
Diagnostic Script: Check Your Rate Limit Status
The following script queries the /rate_limit endpoint (which does not count against your quota) and prints your current core limit, remaining requests, and reset time:
#!/bin/bash
# Diagnostic script to check GitHub API rate limit status
# Requirements: curl, jq
TOKEN="YOUR_PERSONAL_ACCESS_TOKEN"

echo "Fetching GitHub API Rate Limit Status..."

# Capture the body, with the status code appended on its own final line
RESPONSE=$(curl -s -w "\n%{http_code}" \
  -H "Accept: application/vnd.github.v3+json" \
  -H "Authorization: Bearer $TOKEN" \
  https://api.github.com/rate_limit)

# Split the status code and body back apart
HTTP_STATUS=$(printf '%s' "$RESPONSE" | tail -n1)
BODY=$(printf '%s' "$RESPONSE" | sed '$d')

if [ "$HTTP_STATUS" -eq 200 ]; then
  echo "$BODY" | jq '{core_limit: .resources.core.limit, remaining: .resources.core.remaining, reset_time_utc: (.resources.core.reset | todate)}'
else
  echo "Failed to fetch limits. HTTP Status: $HTTP_STATUS"
  echo "$BODY" | jq .
fi

Error Medic Editorial
Expert SREs and DevOps practitioners sharing real-world solutions for infrastructure, CI/CD, and platform engineering challenges. We focus on practical, actionable advice to keep your pipelines green.