Error Medic

Jira Troubleshooting Guide: Fixing Connection Refused, Timeouts, and Crashes

Comprehensive Jira configuration guide to resolve connection refused errors, slow performance, and timeouts. Master JVM, DB pool, and Tomcat tuning.

Key Takeaways
  • JVM memory exhaustion (OutOfMemoryError) is the leading cause of Jira crashes and extreme slow performance.
  • Database connection pool exhaustion leads to 'Timeout waiting for connection' and UI freezes.
  • Tomcat thread starvation causes 'Connection refused' and 502/504 gateway timeouts.
  • Quick fix: Analyze catalina.out and atlassian-jira.log, then adjust setenv.sh (JVM), dbconfig.xml (DB), or server.xml (Tomcat).
Fix Approaches Compared
| Method               | When to Use                              | Time      | Risk   |
|----------------------|------------------------------------------|-----------|--------|
| JVM Memory Tuning    | Jira crash, OutOfMemoryError, severe lag | 10 mins   | Low    |
| DB Pool Adjustment   | DB connection timeouts, slow loading     | 15 mins   | Medium |
| Tomcat Thread Config | Connection refused, proxy timeouts       | 10 mins   | Medium |
| Re-indexing          | Search not working, missing issues       | 1-4 hours | High   |

Understanding Jira Configuration Errors

When managing an enterprise Atlassian environment, encountering Jira not working, frequent crashes, or 'connection refused' messages can paralyze engineering teams. Jira is a complex Java application relying heavily on Tomcat and an underlying relational database. Misconfigurations in any of these three tiers—JVM, Tomcat, or the Database—will inevitably lead to Jira slow performance or complete outages. This Jira configuration guide will walk you through the most common failure states and their exact fixes.

Common Symptoms and Error Messages

Before diving into configuration files, you must identify the exact error. Below are the most common critical errors found in $JIRA_HOME/log/atlassian-jira.log or <JIRA_INSTALL>/logs/catalina.out:

  1. The OOM Crash: java.lang.OutOfMemoryError: Java heap space or GC overhead limit exceeded. This indicates your Jira JVM is starved for memory.
  2. The Database Timeout: Timeout waiting for connection from pool or Communications link failure. Jira cannot check out a database connection because the pool is exhausted or the DB is unreachable.
  3. Connection Refused: Connection refused (Connection refused) or 502 Bad Gateway from your Nginx/Apache proxy. This usually means Tomcat has crashed, is still starting up, or the thread pool is maxed out.

Step 1: Diagnosing Jira Slow Performance and Crashes

Start by isolating the layer causing the issue.

A. Check the Logs. Navigate to your log directories and search for the critical errors listed above. A simple grep can reveal whether you are dealing with a memory issue or a database issue.
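As a starting point for log triage, here is a minimal sketch that classifies a log line into the failing tier. The patterns are the error strings quoted in this guide; the log path in the usage comment is an assumption, so adjust it for your install.

```shell
# Classify a Jira log line into the tier most likely at fault.
classify_jira_error() {
  case "$1" in
    *OutOfMemoryError*|*"GC overhead limit exceeded"*)                 echo "JVM memory" ;;
    *"Timeout waiting for connection"*|*"Communications link failure"*) echo "Database pool" ;;
    *"Connection refused"*|*"502 Bad Gateway"*)                        echo "Tomcat/proxy" ;;
    *)                                                                  echo "other" ;;
  esac
}

# Live usage (log path assumed; adjust for your install):
#   grep -iE "error|timeout|refused" /opt/atlassian/jira/logs/catalina.out |
#     while read -r line; do classify_jira_error "$line"; done | sort | uniq -c
```

Tallying the classifications tells you which of the three fixes below to apply first.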

B. Monitor JVM and Garbage Collection. If Jira is running but painfully slow, Garbage Collection (GC) thrashing is the likely culprit. When the heap is nearly full, the JVM pauses application execution to run GC. If this happens constantly, Jira halts. Use jstat -gcutil <PID> 1000 to watch GC activity in real time. If the FGC (Full GC) count is rising rapidly, you have a memory configuration problem.
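To make "rising FGC" concrete, here is a sketch that compares two jstat samples taken a few seconds apart. It assumes the JDK 8+ `jstat -gcutil` column layout, where FGC is the 9th field; verify against the header your JDK prints.

```shell
# Compare the FGC (Full GC count) field of two `jstat -gcutil` data rows.
# FGC is the 9th column on modern JDKs (S0 S1 E O M CCS YGC YGCT FGC FGCT GCT).
fgc_trend() {
  fgc_a=$(printf '%s\n' "$1" | awk '{print $9}')
  fgc_b=$(printf '%s\n' "$2" | awk '{print $9}')
  if [ "${fgc_b%.*}" -gt "${fgc_a%.*}" ]; then
    echo "FGC rising: likely heap pressure"
  else
    echo "FGC stable"
  fi
}

# Live usage (hypothetical PID):
#   row1=$(jstat -gcutil <PID> | tail -1); sleep 10
#   row2=$(jstat -gcutil <PID> | tail -1)
#   fgc_trend "$row1" "$row2"
```

A rising count between samples points at Fix A below; a stable count means you should look at the database or Tomcat tiers instead.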

C. Inspect Network and Tomcat Ports. If you receive 'Jira connection refused', verify that the Java process is actually bound to the expected port (usually 8080) using netstat -tulpn | grep 8080 (or ss -tlnp | grep 8080 on modern distributions).
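For scripted checks, a small helper that interprets the socket listing is handy. This is a sketch: pipe `ss -tln` (or `netstat -tln`) into it and it reports whether anything is bound to the given port.

```shell
# Report whether any socket in an `ss -tln`-style listing is bound to a port.
port_state() {
  if grep -q ":$1 "; then echo "listening"; else echo "not listening"; fi
}

# Live usage:  ss -tln | port_state 8080
```

"not listening" means Tomcat is down or still starting; "listening" with refused connections points at a proxy or firewall in between.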

Step 2: Fixing Jira Configuration Issues

Fix A: Tuning JVM Memory (setenv.sh / setenv.bat)

If you discovered OutOfMemoryError in your logs, you must adjust Jira's memory limits. This is the single most common Jira troubleshooting fix.

  1. Locate your configuration script: <JIRA_INSTALL>/bin/setenv.sh.
  2. Find the properties JVM_MINIMUM_MEMORY and JVM_MAXIMUM_MEMORY.
  3. Increase these values based on your user tier. For a mid-size enterprise (500-1000 users), 4G to 8G is standard.
# Edit setenv.sh
JVM_MINIMUM_MEMORY="4096m"
JVM_MAXIMUM_MEMORY="4096m"

Pro Tip: Keep minimum and maximum memory identical to prevent the JVM from constantly resizing the heap, which incurs a performance penalty.
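After restarting, it is worth confirming the new limit actually reached the JVM. This sketch extracts the effective -Xmx flag from a process's command line; the `ps` lookup in the usage comment is an assumption about how you find the Jira PID.

```shell
# Extract the effective -Xmx flag from a Java process's command line.
heap_max() {
  printf '%s\n' "$1" | tr ' ' '\n' | grep '^-Xmx' | head -1
}

# Live usage (PID lookup assumed): heap_max "$(ps -o args= -p <PID>)"
```

If the output still shows the old value, setenv.sh was not re-read: restart Jira rather than just reloading the proxy.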

Fix B: Adjusting Database Connection Pools (dbconfig.xml)

If your logs show Timeout waiting for connection from pool, Jira is trying to execute database queries but all connections are currently in use.

  1. Navigate to your Jira home directory: $JIRA_HOME/dbconfig.xml.
  2. Locate the <pool-max-size> parameter. The default is often 20.
  3. Increase this value to 40 or 60, depending on your database server's capacity.
<jdbc-datasource>
    ...
    <pool-min-size>20</pool-min-size>
    <pool-max-size>60</pool-max-size>
    <pool-max-wait>30000</pool-max-wait>
    ...
</jdbc-datasource>

After changing dbconfig.xml, a full Jira restart is required.
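Before raising pool-max-size, check it against the database's own connection cap (max_connections on PostgreSQL and MySQL) with room left for other clients. This sketch just does the arithmetic; the two numbers are inputs you look up yourself, e.g. with `SELECT count(*) FROM pg_stat_activity;` on PostgreSQL.

```shell
# Sanity-check Jira's pool size against the database's connection limit.
pool_headroom() {
  # args: database max connections, Jira pool-max-size
  if [ "$2" -ge "$1" ]; then
    echo "pool exceeds DB limit"
  else
    echo "$(( $1 - $2 )) connections spare"
  fi
}
```

If the pool meets or exceeds the database limit, a busy Jira will exhaust the database for every other client, so raise max_connections first or choose a smaller pool.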

Fix C: Tomcat Thread Pool Tuning (server.xml)

If Jira is unresponsive, reverse proxies (like Nginx) return 504 Gateway Timeouts, and your DB/JVM seem fine, Tomcat might be running out of HTTP worker threads.

  1. Open <JIRA_INSTALL>/conf/server.xml.
  2. Locate the <Connector> block for your primary HTTP port (e.g., 8080).
  3. Increase the maxThreads attribute and ensure acceptCount is configured properly.
<Connector port="8080" 
           maxThreads="200" 
           minSpareThreads="10" 
           connectionTimeout="20000" 
           enableLookups="false" 
           protocol="HTTP/1.1" 
           acceptCount="100" />

maxThreads determines how many simultaneous requests Jira can handle. acceptCount is the queue size when all threads are busy.
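To see how close you are to the maxThreads ceiling, count the HTTP worker threads in a thread dump. The "http-nio-<port>-exec-" naming below matches Tomcat's default NIO connector; verify it against your own jstack output.

```shell
# Count Tomcat HTTP worker threads in a jstack thread dump (read from stdin).
busy_http_threads() {
  grep -c 'http-nio-.*-exec-'
}

# Live usage (hypothetical PID): jstack <PID> | busy_http_threads
```

Compare the count against maxThreads in server.xml: a count pinned at the limit confirms thread starvation rather than a JVM or database problem.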

Step 3: Verifying the Fix

After applying these Jira configuration changes, restart the Jira service: sudo systemctl restart jira

Tail the logs carefully during startup: tail -f <JIRA_INSTALL>/logs/catalina.out. Ensure the application initializes completely without throwing DB connection or memory errors. Once the UI is available, monitor performance from the Jira Administration 'System Info' page to confirm the new memory and database pool settings are actively loaded.
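For an automated post-restart check, recent Jira versions expose a /status endpoint that returns a small JSON state document. This sketch interprets that response; the base URL in the usage comment is an assumption, so substitute your own host, port, and context path.

```shell
# Interpret the JSON body returned by Jira's /status endpoint.
check_jira_up() {
  case "$1" in
    *'"state":"RUNNING"'*) echo "up" ;;
    *)                     echo "not ready" ;;
  esac
}

# Live usage: check_jira_up "$(curl -fsS http://localhost:8080/status)"
```

Polling this in a loop after `systemctl restart jira` tells you exactly when the instance finishes initializing.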

Quick Command Reference

# Find exact JVM memory errors in Catalina logs
grep -i "OutOfMemoryError" /opt/atlassian/jira/logs/catalina.out

# Check if Jira port 8080 is actively listening
netstat -tulpn | grep 8080

# Real-time monitoring of Garbage Collection (Replace <PID> with Jira's Java process ID)
jstat -gcutil <PID> 1000

# Restart Jira service safely
sudo systemctl restart jira

# Tail the startup logs to verify successful launch
tail -f /opt/atlassian/jira/logs/catalina.out

Error Medic Editorial

Error Medic Editorial is a team of Senior DevOps and SRE professionals dedicated to solving complex enterprise infrastructure, CI/CD, and Atlassian toolchain issues.
