Error Medic

How to Fix 'docker permission denied' and Docker Daemon Crashes

Comprehensive guide to fixing 'docker permission denied', connection refused, OOM crashes, and disk full errors, with actionable steps to restore your Docker environment.

Key Takeaways
  • Root Cause 1: Your Linux user lacks the required permissions to access the Docker Unix socket (/var/run/docker.sock), leading to 'permission denied'.
  • Root Cause 2: The Docker daemon itself has failed, stopped, or crashed, resulting in 'connection refused' or 'docker not working' symptoms.
  • Root Cause 3: Resource exhaustion, such as 'docker no space left' on the disk or 'docker out of memory' (OOM), forcing the daemon or containers to crash.
  • Quick Fix Summary: Run 'sudo usermod -aG docker $USER' followed by 'newgrp docker' to fix socket permissions. Use 'systemctl status docker' to check if the daemon is running.
Fix Approaches Compared for Docker Access and Daemon Issues
| Method | When to Use | Time | Risk |
|---|---|---|---|
| Add user to 'docker' group | Best practice for permanent, passwordless non-root Docker CLI access. | 2 mins | Moderate (grants root-equivalent permissions to the user) |
| Prefix commands with 'sudo' | Strict compliance environments where non-root execution is prohibited. | Immediate | Low |
| chmod 666 /var/run/docker.sock | NEVER recommended. Extreme local testing only. | 1 min | High (massive security vulnerability) |
| Clear space via 'docker system prune' | When encountering 'docker disk full' or 'no space left'. | 5 mins | Low (only deletes unused resources) |

Understanding the Error

One of the most common and immediately frustrating roadblocks for developers working in Linux environments is executing a simple docker ps or docker run command and being greeted with this exact error message:

docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied.

This "docker permission denied" error strictly indicates an access control issue. The Docker architecture relies on a client-server model. The Docker CLI (the client) needs to communicate with the Docker daemon (the server process doing the heavy lifting). By default, on Linux systems like Ubuntu, Debian, CentOS, and RHEL, this communication happens over a Unix socket located at /var/run/docker.sock. Because the Docker daemon runs as the root user, this Unix socket is also owned by root. If your standard, non-privileged user attempts to read from or write to this socket, the operating system's kernel blocks the action, resulting in the permission denied error.

However, a permission denied error is just the tip of the iceberg when troubleshooting a "docker not working" scenario. Often, developers confuse permission errors with daemon connectivity issues like "docker connection refused", or they experience cascading failures such as a "docker crash", "docker high cpu", "docker oom" (Out of Memory), or "docker disk full". In this comprehensive guide, we will systematically resolve the socket permission issue and then dive into diagnosing and fixing major Docker daemon and container crashes.


Step 1: Diagnose and Fix "docker permission denied"

Before modifying system groups, verify the current ownership of the Docker socket and your user's group memberships.

1. Check Socket Permissions: Run the following command to inspect the Docker socket:

ls -l /var/run/docker.sock

You should see output similar to: srw-rw---- 1 root docker 0 Feb 23 10:00 /var/run/docker.sock

This output tells us that the socket is a file type s (socket), owned by the user root, and belongs to the group docker. Notice the permissions rw-rw----. This means the owner (root) and members of the group (docker) have read and write access. Anyone else has zero access.

2. Check Your User Groups: Run groups $USER or simply groups to see which groups your current session belongs to. If docker is not listed in the output, you have found your root cause.
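The membership check can be scripted; this is a minimal sketch using `id -nG`, which prints the current user's group names (the diagnostic script at the end of this guide performs the same check):

```shell
# Report whether the current user belongs to the docker group.
# id -nG prints the user's group names; grep -qw matches "docker" as a whole word.
user=${USER:-$(id -un)}
if id -nG | grep -qw docker; then
    echo "user $user is already in the docker group"
else
    echo "user $user is NOT in the docker group -- this is the root cause"
fi
```
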

3. The Official Fix: Add Your User to the Docker Group

The most secure and standard way to fix "docker permission denied" without prefixing every command with sudo is to add your user account to the docker group.

Execute this command:

sudo usermod -aG docker $USER

Warning: The -aG flags are critical. a stands for append. If you omit the a and just use -G, you will remove your user from all other groups (like sudo or wheel), potentially locking you out of administrative privileges on your machine.

4. Apply the Group Changes

Group memberships are evaluated at login. Even though you ran the usermod command, your current shell does not yet know about the new group. You have three options to apply the change:

  1. Fastest: Run newgrp docker in your current terminal. This starts a new shell with the updated group.
  2. More reliable: Log out of your desktop environment or SSH session entirely, and log back in.
  3. Nuclear option: Restart the machine/VM.

Once applied, run docker ps. If it lists the containers (even if empty) without a permission error, you have successfully resolved the primary issue.


Step 2: Troubleshooting "docker connection refused" and Daemon Failures

If you fixed the permissions, but you are now seeing an error like:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

or a raw docker connection refused, this means the socket either doesn't exist or the process listening on the other end (the dockerd daemon) is dead. This commonly triggers developers to search for "docker failed" or "docker crash".

1. Verify Daemon Status

Use systemd to check the service health:

sudo systemctl status docker

Look for the Active: line. If it says inactive (dead) or failed, the daemon is not running.
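For automation, `systemctl is-active` is handier than parsing the full status output: it prints the unit state as a single word and its exit code reflects that state. A minimal sketch (the fallback covers hosts without systemd):

```shell
# Print the daemon state as a single word: active, inactive, failed, or unknown.
# The command substitution fails silently on hosts without systemd,
# leaving $state empty, so ${state:-unknown} covers that case too.
state=$(systemctl is-active docker 2>/dev/null)
echo "docker daemon state: ${state:-unknown}"
```
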

2. Reading the Docker Crash Log

To find out why the daemon crashed, you must consult the "docker crash log". Docker logs its daemon output to the system journal. Retrieve the most recent entries using journalctl:

sudo journalctl -u docker.service --no-pager -n 100

Look for fatal errors near the bottom of the output. Common culprits include corrupted /var/lib/docker directories, invalid configuration in /etc/docker/daemon.json, or resource exhaustion.
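A corrupted /etc/docker/daemon.json is the easiest of these culprits to rule out, because a malformed file makes dockerd exit immediately at startup. As a quick sketch, python3 serves as a convenient JSON syntax checker (any JSON validator will do):

```shell
# Validate daemon.json syntax before restarting the daemon.
if [ -f /etc/docker/daemon.json ]; then
    if python3 -m json.tool /etc/docker/daemon.json > /dev/null 2>&1; then
        echo "daemon.json: valid JSON"
    else
        echo "daemon.json: SYNTAX ERROR -- fix it before restarting docker"
    fi
else
    echo "no daemon.json found (daemon is using built-in defaults)"
fi
```
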

3. Docker Core Dump Diagnosis

In rare cases, a severe bug in the container runtime (containerd or runc) can result in a "docker core dump". If you see a segmentation fault in the journal logs, check your system's core dump facility using coredumpctl. Often, upgrading the Docker Engine (e.g. sudo apt-get install --only-upgrade docker-ce on Debian/Ubuntu) resolves underlying binary bugs causing core dumps.


Step 3: Resolving Resource Exhaustion (Disk, Memory, CPU)

Docker is notorious for consuming system resources invisibly over time, leading to cascading failures.

A. Docker No Space Left / Docker Disk Full

Error symptom: write /var/lib/docker/tmp/GetImageBlob...: no space left on device

Docker caches images, retains stopped containers, and stores massive build caches. When your host OS runs out of disk space, the Docker daemon will abruptly stop working, failing to pull images or start containers.

The Fix: First, check your disk usage:

df -h /var/lib/docker

If usage is at 100%, aggressively clean up Docker's cache. The following command removes all stopped containers, all images not used by a running container, all unused build cache, and all unused volumes:

docker system prune -a --volumes

Note: The -a flag removes ALL images not currently associated with a running container, and --volumes deletes unused volumes along with any data inside them. Drop either flag if you want to keep downloaded images or volume data.
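The disk check itself can be automated. This sketch parses `df -P` (POSIX output mode, which guarantees one stable line per filesystem) and falls back to the root filesystem on machines where Docker's data directory does not exist:

```shell
# Warn when the filesystem backing /var/lib/docker crosses 90% usage.
# In df -P output, column 5 is the Use% field.
target=/var/lib/docker
[ -d "$target" ] || target=/   # fall back to / on hosts without Docker's data dir
usage=$(df -P "$target" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
if [ "$usage" -ge 90 ]; then
    echo "WARNING: $target is ${usage}% full -- run: docker system prune"
else
    echo "$target usage OK (${usage}%)"
fi
```
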

If the disk is still full, check for massive container logs. By default, Docker log files grow infinitely. Truncate them manually, and then configure log rotation in /etc/docker/daemon.json.
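To stop logs from growing without bound, enable the json-file driver's rotation options in /etc/docker/daemon.json (these are documented Docker daemon options; restarting the daemon applies them to newly created containers):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```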

B. Docker OOM (Out of Memory) and Application Crashes

When your system runs out of RAM, the Linux Kernel's OOM Killer steps in to save the OS by terminating the most memory-hungry processes. This often results in a "docker out of memory" scenario.

If your container abruptly stops, check its exit code. An exit code of 137 is a strong indicator of an OOMKill. You can verify this by inspecting the container:

docker inspect <container_id> --format='{{.State.OOMKilled}}'

If this returns true, the container exceeded its allocated memory limits or the host system ran out of memory.
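Exit code 137 is not Docker-specific: it follows the shell convention of 128 plus the terminating signal's number, and SIGKILL is signal 9. You can reproduce it without Docker at all:

```shell
# Reproduce exit code 137 (128 + SIGKILL) by killing a background process --
# the same signal the kernel's OOM Killer sends to an over-limit container.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid"
echo "exit code: $?"   # prints: exit code: 137
```
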

Similarly, you can check the host's system logs for "docker oom" interventions:

sudo dmesg -T | grep -i oom

The Fix:

  1. Impose Limits: Always launch containers with memory limits to prevent a single service from crashing the host: docker run -m 512m --memory-swap 1g my-app.
  2. Profile the App: If a container is constantly OOM killed, there is likely a memory leak in the application code running inside it.

C. Docker High CPU and Docker Slow

If your terminal feels sluggish and applications are timing out, run top or htop on the Linux host. If dockerd or containerd is consuming 100%+ CPU ("docker high cpu"), the daemon might be stuck in a restart loop trying to revive a failing container.

Use docker stats to get a live stream of resource usage across all running containers. If one container is monopolizing the CPU, investigate its internal application logs. You can enforce CPU limits using --cpus="1.5" during docker run.

D. HTTP 502 Bad Gateway / HTTP 504 Gateway Timeout

Web developers frequently encounter "docker 502" or "docker 504" errors when placing an Nginx or Traefik reverse proxy in front of their Docker containers.

  • Docker 502 (Bad Gateway): This means the proxy cannot connect to the backend container. The container might have crashed (check docker ps -a), or the internal application inside the container hasn't bound to the correct network port (e.g., binding to 127.0.0.1 instead of 0.0.0.0 inside the container).
  • Docker 504 (Gateway Timeout): This indicates "docker slow" behavior. The proxy connected to the container, but the application inside took too long to respond. This is usually caused by an overloaded database connection, infinite loops in application code, or a blocked event loop (like in Node.js).

By systematically addressing permission denied errors, verifying daemon health, and monitoring resource constraints like disk space and memory, you can resolve the vast majority of Docker infrastructure outages.

Automated Diagnostic Script

The following script combines the checks from this guide: it verifies the installation, starts a stopped daemon, and diagnoses permission problems.
#!/bin/bash
# Docker Diagnostics & Auto-Fix Script

echo "Diagnosing Docker environment..."

# 1. Check if docker is installed
if ! command -v docker &> /dev/null; then
    echo "[Error] Docker is not installed or not in PATH."
    exit 1
fi

# 2. Check Daemon Status
if ! systemctl is-active --quiet docker; then
    echo "[Warning] Docker daemon is not running. Attempting to start..."
    sudo systemctl start docker
    if ! systemctl is-active --quiet docker; then
        echo "[Error] Failed to start Docker daemon. Check logs: sudo journalctl -u docker.service"
        exit 1
    fi
fi

# 3. Check Permissions
if docker ps &> /dev/null; then
    echo "[Success] Docker daemon is running and permissions are correct."
else
    echo "[Warning] Docker command failed. Likely a permission denied error."
    
    # Check if user is in docker group
    if ! groups "$USER" | grep -qw docker; then
        echo "[Action] Adding user $USER to the docker group..."
        sudo usermod -aG docker "$USER"
        echo "[Info] Please run 'newgrp docker' or log out and log back in to apply changes."
    else
        echo "[Info] User $USER is already in the docker group, but access is still failing."
        echo "[Info] Checking socket ownership..."
        ls -l /var/run/docker.sock
        echo "[Action] Try restarting the docker socket and service: sudo systemctl restart docker.socket docker.service"
    fi
fi

Error Medic Editorial

Error Medic Editorial consists of senior DevOps engineers and SREs dedicated to bringing you clear, actionable, and production-tested solutions for complex infrastructure incidents and backend troubleshooting.
