Error Medic

Azure Functions Timeout: Fix 'Timeout value of 00:05:00 exceeded' and HTTP 429 Throttling

Fix Azure Functions timeout and HTTP 429 throttling errors. Configure functionTimeout in host.json, upgrade to Premium, dedicate storage, or use Durable Functions.

Key Takeaways
  • Consumption plan hard-caps execution at 10 minutes — setting functionTimeout above '00:10:00' is silently ignored at runtime; migrate to Flex Consumption, Premium, or Dedicated for longer workloads
  • Azure Storage account throttling (HTTP 503) can crash the Functions host itself — isolate each high-volume app with a dedicated storage account
  • Queue and Service Bus message lock duration must equal or exceed functionTimeout, or timed-out functions will reprocess the same message indefinitely
  • Quick fix: set functionTimeout to '00:10:00' in host.json on Consumption, or '-1' for unlimited runtime on Premium and Dedicated plans
Fix Approaches Compared
Method | When to Use | Time to Implement | Risk
Increase functionTimeout in host.json | Functions needing 5-10 min (Consumption) or > 30 min (Premium/Dedicated) | < 5 min | Low
Upgrade to Flex Consumption or Premium plan | Functions requiring > 10 min runtime or burst scaling beyond Consumption limits | 30-60 min | Medium — billing impact
Refactor to Durable Functions | Workflows with steps > 10 min each, fan-out/fan-in, or multi-day sagas | Hours to days | Medium — significant code change
Reduce concurrency in host.json | HTTP 429 or storage throttling caused by overwhelming downstream services | < 10 min | Low — may reduce throughput
Dedicated storage account per app | Storage throttling under high-volume queue or blob trigger workloads | 15-30 min | Low

Understanding Azure Functions Timeout and Throttling Errors

Azure Functions enforces strict execution time limits and resource constraints that vary by hosting plan. Exceeding these limits produces distinct error signatures in Application Insights, Azure Monitor logs, and the HTTP responses seen by callers. Understanding the exact error message and which plan you are on is the prerequisite to applying the correct fix.

Exact Error Messages to Look For

When a function invocation exceeds the configured functionTimeout, the host cancels execution and logs the following in Application Insights traces:

Microsoft.Azure.WebJobs.Host: Timeout value of 00:05:00 exceeded by function 'ProcessLargeDataset' (Id: 'a1b2c3d4...'). Initiating cancellation.

The HTTP caller receives a 500 response:

HTTP/1.1 500 Internal Server Error
{
  "id": "a1b2c3d4-e5f6-...",
  "statusCode": 500,
  "message": "The request timed out."
}

Throttling from the Functions host or downstream Azure services returns an HTTP 429:

HTTP/1.1 429 Too Many Requests
Retry-After: 30
{
  "error": {
    "code": "TooManyRequests",
    "message": "The server is currently receiving too many requests. Please retry after some time."
  }
}
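
A well-behaved caller backs off when it sees a 429, honoring the Retry-After hint when the server provides one. A minimal sketch, where do_request is a stand-in for your real HTTP call and is assumed to return a (status, headers) pair:

```python
import time

def call_with_retry(do_request, max_attempts: int = 5):
    """Retry a throttled request, honoring the Retry-After header."""
    for attempt in range(max_attempts):
        status, headers = do_request()
        if status != 429:
            return status
        # Prefer the server's Retry-After hint; fall back to exponential backoff.
        wait = float(headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return status

# Stubbed request: throttled once, then succeeds.
responses = iter([(429, {"Retry-After": "0"}), (200, {})])
print(call_with_retry(lambda: next(responses)))  # 200
```

The same shape applies inside a function that calls a throttled downstream service, though for queue-triggered work it is usually better to let the trigger's retry semantics handle it.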

Azure Storage account throttling — which affects the Functions host's own internal queue processing — appears in Application Insights as:

StorageException: The server is busy.
Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (503) Server Unavailable.
RequestId: 0001a2b3-0000-cc00-b63f-84710c7967bb

Timeout Limits by Hosting Plan

Before changing any configuration, confirm your plan's hard ceiling:

Hosting Plan | Default Timeout | Maximum Timeout
Consumption | 5 min (300 s) | 10 min (600 s) — hard cap
Flex Consumption | 30 min | Unlimited
Premium (EP1–EP3) | 30 min | Unlimited
Dedicated (App Service) | 30 min | Unlimited

The Consumption plan's 10-minute cap is enforced by the platform runtime. Setting functionTimeout to 00:15:00 in host.json on a Consumption plan is silently accepted by the configuration parser but ignored at runtime — the host still kills the function at 10 minutes. This is one of the most common sources of confusion.
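
The silent-cap behavior can be sketched in a few lines (parse_timeout, effective_timeout_s, and PLAN_CEILINGS_S are illustrative names for this sketch, not part of any Azure SDK):

```python
# Plan ceilings in seconds; None means unlimited.
PLAN_CEILINGS_S = {"consumption": 600, "premium": None, "dedicated": None}

def parse_timeout(value: str) -> int:
    """Convert an 'HH:MM:SS' functionTimeout string to seconds."""
    h, m, s = (int(p) for p in value.split(":"))
    return h * 3600 + m * 60 + s

def effective_timeout_s(configured: str, plan: str) -> int:
    """The host enforces min(configured, plan ceiling); any excess is ignored."""
    seconds = parse_timeout(configured)
    ceiling = PLAN_CEILINGS_S[plan]
    return seconds if ceiling is None else min(seconds, ceiling)

print(effective_timeout_s("00:15:00", "consumption"))  # 600 — capped at 10 min
print(effective_timeout_s("00:15:00", "premium"))      # 900 — honored
```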


Step 1: Diagnose the Root Cause

Do not guess. Run the following Kusto queries in Application Insights before touching any configuration.

Identify functions actively hitting the timeout:

exceptions
| where timestamp > ago(24h)
| where type == "Microsoft.Azure.WebJobs.Host.FunctionTimeoutException"
| summarize count() by bin(timestamp, 5m), outerMessage, cloud_RoleName
| order by timestamp desc

Profile average and p95 execution times to find slow functions:

traces
| where timestamp > ago(24h)
| where message startswith "Executed"
| parse message with "Executed '" functionName "'" * "Duration=" duration "ms)"
| summarize
    avg_ms = avg(toint(duration)),
    p95_ms = percentile(toint(duration), 95),
    max_ms = max(toint(duration)),
    calls  = count()
  by functionName
| order by p95_ms desc

Check for HTTP 429 throttling patterns:

requests
| where timestamp > ago(24h)
| where resultCode == "429"
| summarize count() by bin(timestamp, 5m), name, cloud_RoleName
| order by count_ desc

Check the runtime version with the Azure CLI (functionTimeout itself lives in host.json, not in app settings):

az functionapp config appsettings list \
  --name <YOUR_FUNCTION_APP> \
  --resource-group <YOUR_RG> \
  --query '[?name==`FUNCTIONS_EXTENSION_VERSION`]' \
  --output table

Step 2: Fix Timeout Issues

Fix A — Increase functionTimeout in host.json

This is the right move when your function needs 5–10 minutes on Consumption, or more than 30 minutes on a Premium or Dedicated plan.

{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "queues": {
      "visibilityTimeout": "00:10:00"
    },
    "serviceBus": {
      "maxAutoLockRenewalDuration": "00:10:00"
    }
  }
}

For Premium and Dedicated plans, use -1 for unlimited execution time:

{
  "version": "2.0",
  "functionTimeout": "-1"
}

Critical: Always align extensions.queues.visibilityTimeout and extensions.serviceBus.maxAutoLockRenewalDuration with functionTimeout. If the message lock expires before your function completes, the message becomes visible again and a second instance picks it up, causing duplicate processing.
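
The lock-alignment rule above can be checked before deployment; lock_misalignments and hms_to_seconds below are hypothetical helper names for this sketch (it assumes HH:MM:SS values, not "-1"), not Azure tooling:

```python
import json

def hms_to_seconds(value: str) -> int:
    h, m, s = (int(p) for p in value.split(":"))
    return h * 3600 + m * 60 + s

def lock_misalignments(host_json: str) -> list:
    """Flag host.json configs where a message lock can expire before
    functionTimeout, which causes duplicate processing."""
    cfg = json.loads(host_json)
    timeout = hms_to_seconds(cfg["functionTimeout"])
    ext = cfg.get("extensions", {})
    problems = []
    for section, key in (("queues", "visibilityTimeout"),
                         ("serviceBus", "maxAutoLockRenewalDuration")):
        value = ext.get(section, {}).get(key)
        if value is not None and hms_to_seconds(value) < timeout:
            problems.append(f"{section}.{key} ({value}) < functionTimeout")
    return problems

cfg = ('{"version": "2.0", "functionTimeout": "00:10:00",'
       ' "extensions": {"queues": {"visibilityTimeout": "00:05:00"}}}')
print(lock_misalignments(cfg))  # flags queues.visibilityTimeout
```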

Fix B — Upgrade to Flex Consumption or Premium Plan

When the 10-minute Consumption ceiling is genuinely the constraint, migrate the plan:

# Create an Elastic Premium plan
az functionapp plan create \
  --resource-group <YOUR_RG> \
  --name <PREMIUM_PLAN_NAME> \
  --location eastus \
  --sku EP1

# Move the function app to the premium plan
az functionapp update \
  --name <YOUR_FUNCTION_APP> \
  --resource-group <YOUR_RG> \
  --plan <PREMIUM_PLAN_NAME>

After migrating, redeploy with "functionTimeout": "-1" in host.json for unlimited execution.

Fix C — Refactor to Durable Functions

Durable Functions persist orchestration state to Azure Storage, enabling workflows that span hours or days. Each activity step is a short, independent invocation that never needs to exceed a few minutes.

// Orchestrator: stateful, survives host restarts, no single-invocation timeout concern
[FunctionName("ProcessDataOrchestrator")]
public static async Task<List<string>> RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var batches = context.GetInput<List<BatchRequest>>();
    var tasks = batches.Select(b =>
        context.CallActivityAsync<string>("ProcessBatch", b));
    var results = await Task.WhenAll(tasks);
    return results.ToList();
}

// Activity: bounded steps — keep each one well under 5 minutes
[FunctionName("ProcessBatch")]
public static async Task<string> ProcessBatch(
    [ActivityTrigger] BatchRequest batch, ILogger log)
{
    log.LogInformation($"Processing batch {batch.Id}");
    await DoBatchWorkAsync(batch);
    return $"Batch {batch.Id} complete";
}

Durable Functions require the Microsoft.Azure.WebJobs.Extensions.DurableTask NuGet package. Orchestration state is persisted to the app's Azure Storage account by default; alternative backends can be configured under extensions.durableTask.storageProvider in host.json.


Step 3: Fix Throttling Issues

Isolate the backing Azure Storage account

Every function app uses Azure Storage for internal queues and blob leases. High-throughput apps sharing a storage account will hit IOPS limits and throttle the host itself.

# Check which storage account backs your app
az functionapp config appsettings list \
  --name <YOUR_FUNCTION_APP> \
  --resource-group <YOUR_RG> \
  --query '[?name==`AzureWebJobsStorage`].value' \
  --output tsv

# Create a dedicated storage account
az storage account create \
  --name <DEDICATED_STORAGE_ACCT> \
  --resource-group <YOUR_RG> \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2

# Retrieve the new connection string
CONN=$(az storage account show-connection-string \
  --name <DEDICATED_STORAGE_ACCT> \
  --resource-group <YOUR_RG> \
  --output tsv)

# Update the function app to use the dedicated account
az functionapp config appsettings set \
  --name <YOUR_FUNCTION_APP> \
  --resource-group <YOUR_RG> \
  --settings "AzureWebJobsStorage=$CONN"

Throttle concurrency in host.json to protect downstream services

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 4,
      "newBatchThreshold": 2,
      "maxDequeueCount": 5,
      "maxPollingInterval": "00:00:15"
    },
    "serviceBus": {
      "maxConcurrentCalls": 4,
      "prefetchCount": 0,
      "maxAutoLockRenewalDuration": "00:10:00"
    },
    "http": {
      "routePrefix": "api",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 25,
      "dynamicThrottlesEnabled": true
    }
  }
}
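
For queue triggers, per-instance concurrency works out to batchSize + newBatchThreshold (the host fetches another batch once the in-flight count drops to the threshold), multiplied by the number of scaled-out instances. A quick back-of-the-envelope helper:

```python
def max_concurrent_queue_messages(batch_size: int, new_batch_threshold: int,
                                  instances: int = 1) -> int:
    """Upper bound on simultaneously processed queue messages:
    (batchSize + newBatchThreshold) per instance, times instance count."""
    return (batch_size + new_batch_threshold) * instances

# With the config above (batchSize=4, newBatchThreshold=2) at 10 instances:
print(max_concurrent_queue_messages(4, 2, instances=10))  # 60
```

Remember that the instance count is controlled by the platform scale controller, so the downstream service must tolerate the product, not just the per-instance figure.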

Add an exponential backoff retry policy for transient 429 errors

Retry policies are defined per function, not in host.json. In C#, apply the [ExponentialBackoffRetry] attribute; in other languages, add a retry block to the function's function.json. Note that as of runtime 4.x, retry policies are supported only for certain triggers, such as Timer and Event Hubs:

{
  "retry": {
    "strategy": "exponentialBackoff",
    "maxRetryCount": 5,
    "minimumInterval": "00:00:02",
    "maximumInterval": "00:01:00"
  },
  "bindings": [ ... ]
}
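
To sanity-check the delay schedule such a policy produces, the doubling-with-clamp behavior can be approximated as follows (the runtime adds randomization to its intervals, so treat this as an approximation of the shape, not the exact schedule):

```python
def backoff_intervals(max_retry: int, min_s: float, max_s: float):
    """Approximate exponential backoff: start at min_s, double each retry,
    never exceed max_s."""
    return [min(min_s * (2 ** i), max_s) for i in range(max_retry)]

# maxRetryCount=5, minimumInterval=2s, maximumInterval=60s:
print(backoff_intervals(5, 2, 60))  # [2, 4, 8, 16, 32]
```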

Step 4: Set Up Monitoring and Alerting

Create an Azure Monitor alert so you catch regressions before users do:

az monitor metrics alert create \
  --name 'FunctionHighFailureRate' \
  --resource-group <YOUR_RG> \
  --scopes '/subscriptions/<SUB>/resourceGroups/<RG>/providers/Microsoft.Web/sites/<APP>' \
  --condition 'total Http5xx > 50' \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 2

KQL alert query to detect requests approaching the 5-minute default timeout (the duration column is in milliseconds, so 280000 ms ≈ 4.7 min) at a rate above 5 percent in a rolling 5-minute window:

requests
| where timestamp > ago(5m)
| summarize
    total     = count(),
    timed_out = countif(duration > 280000)
| extend timeout_pct = todouble(timed_out) / todouble(total) * 100
| where timeout_pct > 5

Step 5: Validate the Fix

After deploying configuration changes, confirm the improvement under real conditions:

# HTTP smoke test — print status code and total elapsed time
curl -s -o /dev/null \
  -w 'HTTP %{http_code}  duration: %{time_total}s\n' \
  'https://<YOUR_APP>.azurewebsites.net/api/<FUNCTION>?code=<KEY>'

# If you migrated to Durable Functions, list recent orchestration instances
curl -s \
  'https://<YOUR_APP>.azurewebsites.net/runtime/webhooks/durabletask/instances?code=<KEY>&top=10' \
  | python3 -m json.tool

Monitor Application Insights Live Metrics for 10–15 minutes after deployment to confirm the success rate has stabilized above 99 percent and average execution duration is comfortably within the new timeout budget.

Diagnostic Script

Run the script below to consolidate the diagnostics above into a single pass:
#!/usr/bin/env bash
# Azure Functions Timeout & Throttling Diagnostic Script
# Prerequisites: Azure CLI installed and authenticated (az login)
# Usage: RG=my-resource-group APP=my-function-app bash diagnose-az-functions.sh

set -euo pipefail

RG="${RG:?ERROR: set RG to your resource group name}"
APP="${APP:?ERROR: set APP to your function app name}"

echo '=== 1. Hosting plan and SKU ==='
az functionapp show \
  --name "$APP" \
  --resource-group "$RG" \
  --query '{planId:appServicePlanId, kind:kind, state:state}' \
  --output json

echo ''
echo '=== 2. Key app settings ==='
az functionapp config appsettings list \
  --name "$APP" \
  --resource-group "$RG" \
  --query '[?name==`FUNCTIONS_EXTENSION_VERSION` || name==`AzureWebJobsStorage` || name==`APPLICATIONINSIGHTS_CONNECTION_STRING`]' \
  --output table

echo ''
echo '=== 3. Retrieve host.json via Kudu REST API ==='
PUBCREDS=$(az functionapp deployment list-publishing-credentials \
  --name "$APP" \
  --resource-group "$RG" \
  --query '{u:publishingUserName,p:publishingPassword}' \
  --output json)
KUDU_USER=$(echo "$PUBCREDS" | python3 -c 'import sys,json; print(json.load(sys.stdin)["u"])')
KUDU_PASS=$(echo "$PUBCREDS" | python3 -c 'import sys,json; print(json.load(sys.stdin)["p"])')
curl -s -u "${KUDU_USER}:${KUDU_PASS}" \
  "https://${APP}.scm.azurewebsites.net/api/vfs/site/wwwroot/host.json" \
  | python3 -m json.tool 2>/dev/null \
  || echo '[WARN] Could not retrieve host.json via Kudu'

echo ''
echo '=== 4. Application Insights: failures and timeouts (last 1h) ==='
AI_APP=$(az monitor app-insights component list \
  --resource-group "$RG" \
  --query '[0].appId' \
  --output tsv 2>/dev/null || echo '')

if [ -n "$AI_APP" ]; then
  echo '--- Top failing functions ---'
  az monitor app-insights query \
    --app "$AI_APP" \
    --analytics-query '
      requests
      | where timestamp > ago(1h)
      | summarize
          total   = count(),
          failed  = countif(success == false),
          slow    = countif(duration > 280000)
        by name
      | extend fail_pct = round(todouble(failed) / total * 100, 1)
      | order by failed desc
      | take 10
    ' \
    --output table

  echo ''
  echo '--- HTTP 429 throttling events ---'
  az monitor app-insights query \
    --app "$AI_APP" \
    --analytics-query '
      requests
      | where timestamp > ago(1h) and resultCode == "429"
      | summarize count() by bin(timestamp, 5m), name
      | order by timestamp desc
    ' \
    --output table
else
  echo '[WARN] No Application Insights component found in resource group.'
  echo '       Open Azure Portal -> Log Analytics and run the KQL queries manually.'
fi

echo ''
echo '=== 5. Recommended remediations ==='
echo 'Timeout on Consumption (need 5-10 min): set functionTimeout to 00:10:00 in host.json'
echo 'Timeout needing > 10 min:               upgrade plan to Premium; set functionTimeout to -1'
echo 'Duplicate message after timeout:        set queues.visibilityTimeout == functionTimeout'
echo 'Storage throttling (HTTP 503):          provision a dedicated storage account per function app'
echo 'HTTP 429 concurrency throttling:        lower http.maxConcurrentRequests in host.json'
echo 'Long-running workflows:                 refactor to Durable Functions activity steps'

Error Medic Editorial

The Error Medic Editorial team is composed of senior DevOps and SRE engineers with hands-on production experience operating Azure Functions workloads at scale. Their troubleshooting guides are grounded in real incident post-mortems, Azure Monitor telemetry analysis, and direct engagement with the Azure Functions open-source GitHub repository.
