Error Medic

Consul Connection Refused: Fixing 'dial tcp 127.0.0.1:8500: connect: connection refused'

Fix Consul connection refused errors: verify agent status, bind address, firewall rules, and CONSUL_HTTP_ADDR. Step-by-step diagnostic commands included.

Key Takeaways
  • Consul agent is stopped or crashed — verify with 'systemctl status consul' and restart if needed
  • Agent bound to wrong interface: client_addr set to 127.0.0.1 blocks all non-loopback connections on port 8500
  • Port 8500 blocked by iptables/firewalld or cloud security group — a TCP REJECT sends connection refused while DROP causes timeouts
  • CONSUL_HTTP_ADDR environment variable set to stale IP, wrong port, or missing from systemd EnvironmentFile
  • TLS mismatch: server requires HTTPS on port 8501 but client connects via plain HTTP on 8500
  • Quick fix: 'sudo systemctl restart consul', confirm with 'ss -tlnp | grep 8500', validate with 'curl http://127.0.0.1:8500/v1/status/leader'
Fix Approaches Compared
| Method | When to Use | Time | Risk |
| --- | --- | --- | --- |
| Restart consul systemd service | Agent stopped, crashed, or in failed state | < 1 min | Low — service restart only |
| Fix client_addr in consul.hcl | Agent running but unreachable from remote host or container | 5–10 min | Low — requires agent restart |
| Open firewall port 8500 | Remote host gets connection refused or packet trace shows REJECT | 2–5 min | Medium — expands network exposure |
| Correct CONSUL_HTTP_ADDR | CLI or SDK fails but curl to localhost works | 1 min | Low — environment variable only |
| Switch client to HTTPS port 8501 | TLS enabled on server, plain HTTP returns EOF or empty reply | 10–20 min | Low-Medium — cert management required |
| Fix Docker/K8s networking | Containerized app targeting 127.0.0.1 of container loopback | 10–30 min | Low — no code changes, config only |

Understanding the Error

When you encounter consul connection refused, the full error message typically looks like one of these:

Get "http://127.0.0.1:8500/v1/status/leader": dial tcp 127.0.0.1:8500: connect: connection refused
Error querying Consul agent: Get "http://127.0.0.1:8500/v1/agent/self": dial tcp 127.0.0.1:8500: connect: connection refused
Error making API request. URL: PUT http://127.0.0.1:8500/v1/kv/mykey. Code: 0. Errors:
  connection refused

Port 8500 is Consul's default HTTP API port. Connection refused means your TCP SYN reached the target host, which actively refused it (with a reset) because no process was listening on that port. This is fundamentally different from a timeout: a timeout means packets are being silently dropped (firewall DROP rules), while connection refused means the host is network-reachable but nothing is accepting connections. This distinction is your first diagnostic clue — start by checking whether the Consul process is running before investigating firewalls.
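
The distinction can be scripted. Here is a minimal sketch using bash's built-in /dev/tcp (the `probe` helper is an illustrative name, not a Consul tool, and it checks TCP only):

```shell
# Classify a host:port as open, refused, or timing out.
# refused -> nothing listening (check the process)
# timeout -> packets silently dropped (check firewall DROP rules)
probe() {
  local host=$1 port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
  local rc=$?
  if [ "$rc" -eq 0 ]; then
    echo "open: something is listening on ${host}:${port}"
  elif [ "$rc" -eq 124 ]; then
    echo "timeout: packets silently dropped — suspect a firewall DROP rule"
  else
    echo "refused: host reachable but nothing listening — suspect the process"
  fi
}

probe 127.0.0.1 8500
```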

The five root causes, in order of frequency:

  1. Consul agent is stopped or has crashed
  2. Agent started but bound to a different interface or port than expected
  3. CONSUL_HTTP_ADDR environment variable is set incorrectly
  4. A firewall or cloud security group is rejecting connections to port 8500
  5. TLS/HTTPS mismatch between client and server

Step 1: Confirm the Consul Agent Is Running

Start here. In practice, most connection refused cases resolve at this step.

# Check systemd service status
sudo systemctl status consul

# If not using systemd, look for the process directly
ps aux | grep consul | grep -v grep

# Confirm what is listening on port 8500
ss -tlnp | grep 8500

If systemctl status consul shows inactive (dead) or failed, restart and inspect logs:

sudo systemctl restart consul
sudo journalctl -u consul -n 100 --no-pager

Common startup failure messages to look for in the journal:

  • Failed to get advertise address — advertise_addr is unreachable or not set, common on multi-homed hosts
  • bind: address already in use — another process holds port 8500; find it with ss -tlnp | grep 8500
  • Error loading TLS config — certificate file missing or path incorrect in configuration
  • permission denied — the consul user cannot read the config directory or data directory
  • No cluster leader — agent started but quorum not formed; this is a cluster issue, not an API issue

If the agent starts but immediately exits, run it in the foreground to capture the full failure output:

sudo -u consul consul agent -config-dir=/etc/consul.d -log-level=debug 2>&1 | head -60

Step 2: Verify Bind Address and Port Configuration

Even when Consul is running, it may be listening on a different interface than you are connecting to. Verify what address it has actually bound:

# See which address:port consul has bound
ss -tlnp | grep consul

# Test connectivity across all common variants
curl -sf http://127.0.0.1:8500/v1/status/leader && echo 'OK: loopback'
curl -sf http://$(hostname -I | awk '{print $1}'):8500/v1/status/leader && echo 'OK: primary IP'

In Consul's configuration file (typically /etc/consul.d/consul.hcl), the relevant settings are:

bind_addr   = "0.0.0.0"   # Interfaces for Serf and server RPC
client_addr = "127.0.0.1" # Interfaces for HTTP, HTTPS, and DNS APIs

ports {
  http  = 8500
  https = 8501
  dns   = 8600
  grpc  = 8502
}

The most common misconfiguration: client_addr = "127.0.0.1" restricts the HTTP API to the loopback interface only. Any connection from another host, a Docker container on a bridge network, or a Kubernetes pod will be refused. Change it to client_addr = "0.0.0.0" to accept connections on all interfaces, then use your firewall to restrict access.
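
If binding to every interface is broader than you want, client_addr also accepts a space-separated list of addresses, so you can expose the API on loopback plus one specific interface. The bridge gateway address below is an example:

```hcl
# Loopback plus the default Docker bridge gateway (example addresses)
client_addr = "127.0.0.1 172.17.0.1"
```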

To inspect the running configuration without restarting:

consul info | grep -i addr

# Or query the API directly (only works if already reachable at loopback)
curl -s http://127.0.0.1:8500/v1/agent/self | python3 -m json.tool | grep -A10 Config

After editing consul.hcl, apply the change. Settings such as services, checks, and log level can be reloaded in place, but client_addr, bind_addr, and ports are not reloadable — those require a full restart:

sudo systemctl restart consul

# For reloadable settings only (services, checks, log level):
sudo consul reload
# or send SIGHUP directly
sudo kill -HUP $(pgrep consul)

Step 3: Check CONSUL_HTTP_ADDR and Related Environment Variables

Many tools — Terraform, Vault, Nomad, and custom automation scripts — read CONSUL_HTTP_ADDR to determine where to connect. An incorrect value silently redirects every request to a dead endpoint:

# Check current shell environment
echo $CONSUL_HTTP_ADDR
env | grep CONSUL

If CONSUL_HTTP_ADDR is unset, clients default to http://127.0.0.1:8500. If it points to a decommissioned IP, wrong port, or an unresolvable hostname, you will get connection refused on every call. Correct it:

# Plain HTTP cluster
export CONSUL_HTTP_ADDR='http://127.0.0.1:8500'

# TLS-enabled cluster
export CONSUL_HTTP_ADDR='https://consul.example.com:8501'
export CONSUL_HTTP_SSL='true'
export CONSUL_HTTP_SSL_VERIFY='true'
export CONSUL_CACERT='/etc/consul.d/ca.pem'
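
When a toolchain misbehaves, it helps to turn whatever CONSUL_HTTP_ADDR holds into a plain host:port you can probe directly with ss or nc. A small normalizing helper (`consul_hostport` is an illustrative name, not part of Consul):

```shell
# Normalize a Consul HTTP address into host:port (default port 8500 if omitted).
consul_hostport() {
  local addr="${1:-${CONSUL_HTTP_ADDR:-http://127.0.0.1:8500}}"
  addr="${addr#http://}"
  addr="${addr#https://}"
  case "$addr" in
    *:*) echo "$addr" ;;          # port given explicitly
    *)   echo "${addr}:8500" ;;   # fall back to the default HTTP port
  esac
}

consul_hostport "https://consul.example.com:8501"   # -> consul.example.com:8501
consul_hostport "10.0.0.5"                          # -> 10.0.0.5:8500
```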

Critical note for systemd services: environment variables set in your shell are not inherited by systemd units. Place them in a persistent environment file:

# /etc/consul.d/consul-client.env
CONSUL_HTTP_ADDR=http://127.0.0.1:8500

# Reference this file in your service unit:
# [Service]
# EnvironmentFile=/etc/consul.d/consul-client.env

Step 4: Inspect Firewall and Security Group Rules

When Consul is running and bound correctly on its host but a remote client gets connection refused, a firewall REJECT rule is the likely cause: iptables REJECT answers with an ICMP port-unreachable or a TCP reset, both of which clients report as connection refused. On Linux:

# List all iptables INPUT rules
sudo iptables -L INPUT -n --line-numbers

# On firewalld systems
sudo firewall-cmd --list-all

# Open port 8500 with firewalld
sudo firewall-cmd --permanent --add-port=8500/tcp
sudo firewall-cmd --reload

# Or with raw iptables (prepend to INPUT chain)
sudo iptables -I INPUT 1 -p tcp --dport 8500 -j ACCEPT
sudo iptables-save | sudo tee /etc/iptables/rules.v4

For cloud-hosted instances, check the cloud-level controls in addition to OS-level firewalls:

  • AWS EC2: Security Group inbound rules must allow TCP 8500 from your source CIDR. Security Groups are stateful.
  • GCP: VPC Firewall Rules must target the instance's network tag and allow TCP 8500. Check both allow and deny rules.
  • Azure: Network Security Group inbound rules must allow TCP 8500 with a priority number lower (i.e., evaluated earlier) than any deny rules.

For multi-node Consul clusters, also ensure these ports are open between all cluster nodes:

  • 8300/tcp — server RPC (client-to-server and server-to-server)
  • 8301/tcp+udp — Serf LAN gossip (all agents)
  • 8302/tcp+udp — Serf WAN gossip (cross-datacenter only)

Blocking ports 8300 or 8301 does not directly cause port 8500 connection refused, but it prevents cluster formation. Without a leader, agents answer API calls with "No cluster leader" errors, which application code sometimes surfaces alongside or as connection errors.
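
To check the TCP side of these ports from a peer node in one pass, a sketch reusing bash's /dev/tcp (`probe_cluster_ports` is an illustrative helper; the UDP half of the gossip ports needs a tool such as nc -u):

```shell
# Probe the TCP listeners Consul needs for cluster formation plus the HTTP API.
probe_cluster_ports() {
  local target=$1
  local port
  for port in 8300 8301 8302 8500; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/${target}/${port}" 2>/dev/null; then
      echo "open    ${target}:${port}"
    else
      echo "closed  ${target}:${port}"
    fi
  done
}

probe_cluster_ports 127.0.0.1
```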


Step 5: Diagnose TLS Configuration Mismatches

With TLS enabled on the Consul server, connecting via plain HTTP on port 8500 produces misleading error messages:

curl: (52) Empty reply from server

or:

Get "http://127.0.0.1:8500/v1/status/leader": EOF

Some client libraries surface these as connection refused. Check your TLS configuration:

# Scan for TLS-related directives
grep -rE 'verify_incoming|verify_outgoing|tls|https' /etc/consul.d/

# Attempt connection explicitly over HTTPS
curl --cacert /etc/consul.d/ca.pem https://127.0.0.1:8501/v1/status/leader

Consul 1.12+ uses a dedicated tls configuration block:

tls {
  defaults {
    ca_file   = "/etc/consul.d/ca.pem"
    cert_file = "/etc/consul.d/server.pem"
    key_file  = "/etc/consul.d/server-key.pem"
    verify_incoming = true
    verify_outgoing = true
  }
}

When verify_incoming = true is set on the HTTP listener, the server requires mutual TLS and immediately closes plain connections, which client libraries report as connection refused or EOF.
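
To confirm whether a port expects TLS at all, independent of Consul, probe it with openssl: a plain-HTTP or closed port never completes the handshake. `tls_probe` is an illustrative helper, assuming openssl is installed:

```shell
# Print "tls" if host:port completes a TLS handshake, "plain-or-closed" otherwise.
tls_probe() {
  if echo | timeout 3 openssl s_client -connect "$1" 2>/dev/null | grep -q 'BEGIN CERTIFICATE'; then
    echo "tls"
  else
    echo "plain-or-closed"
  fi
}

tls_probe 127.0.0.1:8501
```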


Step 6: Docker and Kubernetes Environments

In containerized environments, the single most common mistake is using 127.0.0.1 to reach Consul from inside a container. The container's loopback is not the host loopback.

Docker — find the actual reachable address:

# Inspect Consul container IP
docker inspect consul | grep IPAddress

# Test from inside your application container
docker exec -it your_app curl http://consul:8500/v1/status/leader

# If Consul runs on the host with host networking, bind the API beyond loopback
# (-dev defaults client_addr to 127.0.0.1, so -client 0.0.0.0 is required)
docker run --network host hashicorp/consul agent -dev -client 0.0.0.0
# From bridge containers, target the bridge gateway: CONSUL_HTTP_ADDR=http://172.17.0.1:8500

Docker Compose — use service names:

services:
  consul:
    image: hashicorp/consul:latest
    ports:
      - "8500:8500"
  app:
    environment:
      CONSUL_HTTP_ADDR: http://consul:8500
    depends_on:
      - consul

Kubernetes — use the ClusterIP service DNS name:

export CONSUL_HTTP_ADDR='http://consul.consul.svc.cluster.local:8500'

# Verify the service and its endpoints exist
kubectl get svc consul -n consul
kubectl get endpoints consul -n consul

If kubectl get endpoints shows <none>, the Consul pods are not running or the service's label selector does not match the pod labels. Fix the pod issue first — the connection refused in your application is a downstream symptom.


Step 7: ACL Bootstrap and Token Edge Cases

Consul 1.4 introduced the current ACL system. ACLs are disabled by default in a plain install, but many deployment tools (the official Helm chart, for example) enable them. A misconfigured ACL setup rarely stops the agent from listening, but it produces errors that are easy to confuse with connectivity failures:

# Test without token to separate connectivity from authorization
unset CONSUL_HTTP_TOKEN
curl http://127.0.0.1:8500/v1/status/leader

# GET /v1/status/leader never requires an ACL token
# If this succeeds, you have connectivity — the issue is authorization, not networking

# Check ACL bootstrap status
consul acl bootstrap
# Expected if already bootstrapped: 'ACL bootstrap no longer allowed'

Note that GET /v1/status/leader succeeds even with ACLs enabled and no token provided. Use it as a pure connectivity health check to isolate networking problems from authorization problems.
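
The same separation can be automated: a refused connection surfaces as curl's status 000, while an ACL problem comes back as a real HTTP status. `classify` is an illustrative helper, assuming curl is installed:

```shell
# Separate network failures from authorization failures against a Consul endpoint.
classify() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 3 "$1" 2>/dev/null)
  case "$code" in
    ""|000) echo "network: connection refused or unreachable" ;;
    200)    echo "ok: connectivity and authorization both fine" ;;
    403)    echo "authz: reachable, but the token lacks permission" ;;
    *)      echo "http: status $code" ;;
  esac
}

classify "http://127.0.0.1:8500/v1/status/leader"
```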

Complete Diagnostic Script

The script below bundles the checks from the steps above into a single pass:
#!/usr/bin/env bash
# consul-diagnose.sh — Consul Connection Refused Diagnostic
# Usage: sudo bash consul-diagnose.sh

CONSUL_ADDR="${CONSUL_HTTP_ADDR:-http://127.0.0.1:8500}"

echo '=== Consul Connection Refused Diagnostic ==='
echo "Target: $CONSUL_ADDR"
echo ''

# ── 1. Process check ──────────────────────────────────────────────────────────
echo '--- 1. Process Check ---'
if pgrep -x consul > /dev/null 2>&1; then
  echo "[OK]   consul is running (PID: $(pgrep -x consul | tr '\n' ' '))"
else
  echo '[FAIL] consul process is NOT running'
  echo '       Fix: sudo systemctl start consul'
fi

# ── 2. Port listener ─────────────────────────────────────────────────────────
echo ''
echo '--- 2. Port 8500 Listener ---'
LISTENER=$(ss -tlnp 2>/dev/null | grep ':8500' || true)
if [ -n "$LISTENER" ]; then
  echo "[OK]   $LISTENER"
else
  echo '[FAIL] Nothing is listening on port 8500'
fi

# ── 3. Systemd service status ────────────────────────────────────────────────
echo ''
echo '--- 3. Systemd Service Status ---'
if command -v systemctl > /dev/null 2>&1; then
  ACTIVE=$(systemctl is-active consul 2>/dev/null || true)
  ENABLED=$(systemctl is-enabled consul 2>/dev/null || true)
  [ "$ACTIVE" = 'active' ]   && echo "[OK]   consul.service is active"   || echo "[FAIL] consul.service is $ACTIVE"
  [ "$ENABLED" = 'enabled' ] && echo "[OK]   consul.service is enabled"  || echo "[WARN] consul.service is $ENABLED (won't start on boot)"
else
  echo '[INFO] systemd not available'
fi

# ── 4. Connectivity tests ────────────────────────────────────────────────────
echo ''
echo '--- 4. HTTP API Connectivity ---'
for ADDR in 'http://127.0.0.1:8500' 'http://0.0.0.0:8500' "http://$(hostname -I | awk '{print $1}'):8500"; do
  if curl -sf --max-time 3 "${ADDR}/v1/status/leader" > /dev/null 2>&1; then
    echo "[OK]   Reachable: $ADDR"
  else
    echo "[FAIL] Not reachable: $ADDR"
  fi
done

# ── 5. Environment variables ─────────────────────────────────────────────────
echo ''
echo '--- 5. CONSUL_* Environment Variables ---'
echo "CONSUL_HTTP_ADDR      = ${CONSUL_HTTP_ADDR:-<unset — default: http://127.0.0.1:8500>}"
echo "CONSUL_HTTP_SSL       = ${CONSUL_HTTP_SSL:-<unset>}"
echo "CONSUL_CACERT         = ${CONSUL_CACERT:-<unset>}"
if [ -n "${CONSUL_HTTP_TOKEN:-}" ]; then
  echo 'CONSUL_HTTP_TOKEN     = <set>'
else
  echo 'CONSUL_HTTP_TOKEN     = <unset>'
fi

# ── 6. Firewall check ────────────────────────────────────────────────────────
echo ''
echo '--- 6. Firewall (iptables) ---'
if command -v iptables > /dev/null 2>&1; then
  REJECT_COUNT=$(sudo iptables -L INPUT -n 2>/dev/null | grep -cE '(REJECT|DROP)' || true)
  PORT_RULE=$(sudo iptables -L INPUT -n 2>/dev/null | grep '8500' || true)
  if [ -n "$PORT_RULE" ]; then
    echo "[INFO] iptables rules matching 8500: $PORT_RULE"
  else
    echo "[OK]   No explicit iptables rules for port 8500"
  fi
  echo "       Total INPUT REJECT/DROP rules: $REJECT_COUNT"
fi

# ── 7. Consul configuration ──────────────────────────────────────────────────
echo ''
echo '--- 7. Consul Configuration ---'
if [ -d /etc/consul.d ]; then
  CLIENT=$(grep -rh 'client_addr' /etc/consul.d/ 2>/dev/null | head -1)
  CLIENT=${CLIENT:-"<not set — default: 127.0.0.1>"}
  BIND=$(grep -rh 'bind_addr' /etc/consul.d/ 2>/dev/null | head -1)
  BIND=${BIND:-"<not set>"}
  TLS=$(grep -rlE 'verify_incoming|verify_outgoing|tls \{' /etc/consul.d/ 2>/dev/null | head -1)
  TLS=${TLS:-"<none found>"}
  echo "client_addr:  $CLIENT"
  echo "bind_addr:    $BIND"
  echo "TLS config file: $TLS"
else
  echo '[WARN] /etc/consul.d not found'
fi

# ── 8. Recent logs ───────────────────────────────────────────────────────────
echo ''
echo '--- 8. Recent Consul Logs ---'
if command -v journalctl > /dev/null 2>&1; then
  sudo journalctl -u consul -n 20 --no-pager 2>/dev/null || echo '[INFO] No journald logs for consul'
fi

echo ''
echo '=== Diagnostic Complete ==='
echo 'Next steps based on findings:'
echo '  Process not running   -> sudo systemctl restart consul'
echo '  Listening on 127.0.0.1 only -> set client_addr = "0.0.0.0" in consul.hcl'
echo '  No process, port open -> another service owns 8500, check ss -tlnp'
echo '  TLS config found      -> use CONSUL_HTTP_ADDR=https://host:8501'

Error Medic Editorial

The Error Medic Editorial team is composed of senior DevOps engineers and SREs with hands-on experience operating distributed systems at scale. Our troubleshooting guides are derived from real incident post-mortems and production debugging sessions covering Consul, Vault, Nomad, Kubernetes, and the broader service mesh ecosystem.
