Jira Configuration Guide: Troubleshooting Connection Refused, Timeouts, and Slow Performance
Comprehensive troubleshooting guide for Jira configuration issues, covering 'Connection Refused' errors, performance degradation, and timeouts.
- Database connection exhaustion or misconfigured max pool size often causes Jira timeouts and 'Connection Refused' errors.
- Insufficient JVM memory allocation (Heap space) leads to frequent garbage collection, resulting in slow performance or crashes (OutOfMemoryError).
- Reverse proxy (Nginx/Apache) misconfigurations can block API requests or cause websocket failures, disrupting real-time updates.
- Corrupted indexes or outdated plugins are common culprits for UI sluggishness and search failures.
| Method | When to Use | Time | Risk |
|---|---|---|---|
| Increase JVM Heap Size | Frequent OutOfMemory errors, constant high CPU from GC | 5 mins | Low (Requires restart) |
| Tune DB Connection Pool | Log shows 'Timeout waiting for idle object', DB bottleneck | 10 mins | Medium (DB config change) |
| Rebuild Jira Indexes | Search is broken, issues missing from boards, general slowness | 1-4 hours | High (Impacts performance during build) |
| Verify Proxy Settings | Websocket errors, 'Base URL mismatch' warnings in UI | 15 mins | Low |
Understanding Jira Configuration Errors
Jira is a complex Java application that relies heavily on a relational database and often sits behind a reverse proxy. When users report that "Jira is not working" or "Jira is slow," the root cause usually lies in the interaction of these three components: the JVM, the database, and the proxy.
Common Symptoms and Error Messages
The 'Connection Refused' Error:
- Symptom: Users cannot access the UI, or API calls fail instantly.
- Log Snippet: one of:
  - java.net.ConnectException: Connection refused
  - org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken pipe
- Cause: The Jira Tomcat server is down, the port is blocked by a firewall, or the reverse proxy is misconfigured and cannot reach the upstream Jira server.
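When this error appears, a few shell checks quickly narrow down which layer is failing. This is a minimal sketch assuming a default Linux install listening on port 8080; adjust paths and ports for your environment:

```shell
# 1. Is the Jira Java process running at all?
ps aux | grep -i '[j]ira'

# 2. Is anything listening on the Tomcat port (default 8080)?
ss -tlnp | grep 8080

# 3. Bypass the proxy: can the Jira host reach Tomcat directly?
#    A healthy instance answers the status endpoint with {"state":"RUNNING"}.
curl -s http://localhost:8080/status
```

If step 2 shows nothing listening, the problem is the Jira/Tomcat process itself; if it does and step 3 succeeds, suspect the reverse proxy or a firewall in between.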
Jira Slow Performance and Timeouts:
- Symptom: Page loads take 10+ seconds, bulk operations fail, or dashboards timeout.
- Log Snippet: one of:
  - WARNING: Query took longer than expected
  - java.util.concurrent.TimeoutException
  - The Tomcat connector configured to listen on port 8080 failed to start.
- Cause: Database connection pool exhaustion, insufficient JVM heap memory causing 'Stop-the-World' garbage collection pauses, or un-optimized third-party plugins.
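To separate application slowness from network or proxy slowness, a useful first measurement is timing a request from the Jira host itself. A sketch, assuming the default base URL and dashboard path:

```shell
# Time the dashboard from localhost, bypassing any reverse proxy.
# A large 'total' with a small 'connect' points at the application or
# database rather than the network.
curl -s -o /dev/null \
  -w "connect: %{time_connect}s  total: %{time_total}s\n" \
  http://localhost:8080/secure/Dashboard.jspa
```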
Jira Crash (OutOfMemoryError):
- Symptom: Jira becomes completely unresponsive and the Java process terminates or hangs indefinitely.
- Log Snippet: one of:
  - java.lang.OutOfMemoryError: Java heap space
  - java.lang.OutOfMemoryError: GC overhead limit exceeded
- Cause: Jira has exhausted its allocated memory. This often happens during massive exports, complex JQL queries, or due to memory leaks in poorly written apps.
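To make the next crash diagnosable, it helps to have the JVM write a heap dump when the OutOfMemoryError occurs. A sketch for setenv.sh, using Jira's JVM_SUPPORT_RECOMMENDED_ARGS hook and standard HotSpot flags; the dump path is an example, so point it at a volume with room for multi-GB files:

```shell
# Append standard HotSpot flags via Jira's setenv.sh hook so a heap dump
# is captured automatically on the next OutOfMemoryError.
JVM_SUPPORT_RECOMMENDED_ARGS="${JVM_SUPPORT_RECOMMENDED_ARGS} -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/atlassian/heapdumps"
export JVM_SUPPORT_RECOMMENDED_ARGS
```

The resulting .hprof file can then be analysed offline (e.g. with Eclipse MAT) to find the leaking allocation.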
Step 1: Diagnose the Bottleneck
Before changing configurations, you must identify the bottleneck.
- Check the Logs: The atlassian-jira.log and catalina.out files are your best friends. Look for ERROR and FATAL entries.
- Monitor JVM Resources: Use tools like jstat, jcmd, or APM tools (AppDynamics, New Relic) to monitor garbage collection activity. If the JVM is spending >10% of its time in GC, you have a memory issue.
- Database Metrics: Check your database server's CPU, IOPS, and active connection count. A slow database will make Jira appear slow.
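The GC check above can be done from the command line; a minimal sketch, assuming the JDK tools are on the PATH and Jira runs as a standard Tomcat Bootstrap process:

```shell
# Find the Jira/Tomcat JVM and sample GC utilisation once per second.
# Watch the O (old-gen occupancy) and GCT (total GC seconds) columns:
# old gen pinned near 100% with GCT climbing fast means memory pressure.
JIRA_PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' | head -1)
jstat -gcutil "$JIRA_PID" 1000
```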
Step 2: Implement Configuration Fixes
Fix A: Tuning the JVM Memory
If you see OutOfMemoryError, you need to adjust the setenv.sh (Linux) or setenv.bat (Windows) file.
Locate JVM_MINIMUM_MEMORY and JVM_MAXIMUM_MEMORY.
- Rule of thumb: Don't allocate more than 50% of the system RAM to Jira to leave room for the OS and filesystem cache.
- Set Xms (minimum) and Xmx (maximum) to the same value to prevent heap resizing overhead.
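The 50% rule of thumb can be checked directly on the host; a small sketch for Linux:

```shell
# Read total RAM from /proc/meminfo and halve it to get a sensible
# ceiling for JVM_MAXIMUM_MEMORY (leaving the rest for OS + page cache).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$(( total_kb / 2 / 1024 ))
echo "Suggested JVM_MAXIMUM_MEMORY ceiling: ${half_mb}m"
```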
Fix B: Database Connection Pool Adjustment
If you see Timeout waiting for idle object in atlassian-jira.log, your Tomcat connection pool is exhausted. Edit dbconfig.xml in the Jira Home directory.
Increase <pool-max-size>. The default is usually 20, which is too low for enterprise environments. Try 50 or 100, but ensure your database backend can handle the increased maximum connections from all Jira nodes.
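For orientation, the relevant elements in dbconfig.xml look roughly like this (values are illustrative, not recommendations for every site):

```xml
<jdbc-datasource>
  <!-- connection URL, driver class and credentials elided -->
  <pool-min-size>20</pool-min-size>
  <pool-max-size>50</pool-max-size>
  <!-- how long (ms) a thread waits for a free connection before the
       'Timeout waiting for idle object' error is raised -->
  <pool-max-wait>30000</pool-max-wait>
</jdbc-datasource>
```

Remember that the database's own connection limit must cover pool-max-size multiplied by the number of Jira nodes, plus any other clients.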
Fix C: Rebuilding Indexes
If performance is slow but CPU/Memory look fine, corrupted Lucene indexes are likely the culprit. Navigate to Administration > System > Indexing and perform a Full Re-Index. Warning: Background indexing is safer but slower. A foreground re-index locks Jira but finishes faster.
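A re-index can also be triggered and monitored through the Jira Server/Data Center REST API rather than the UI; in this sketch, 'admin:password' and the base URL are placeholders:

```shell
# Kick off a background re-index via REST
# ('admin:password' and the base URL are placeholders).
curl -s -u admin:password -X POST \
  "http://localhost:8080/rest/api/2/reindex?type=BACKGROUND"

# Poll the progress of the running re-index.
curl -s -u admin:password "http://localhost:8080/rest/api/2/reindex"
```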
Quick Reference: Diagnostic Commands
# 1. Check if Jira port (default 8080) is listening
netstat -tulpn | grep 8080
# 2. Search logs for OutOfMemory errors
grep -i "OutOfMemoryError" /var/atlassian/application-data/jira/log/atlassian-jira.log
# 3. Check Tomcat access logs for slow requests (>5000 ms; assumes the access-log pattern ends with %D, the request time in milliseconds)
awk '$NF > 5000' /opt/atlassian/jira/logs/localhost_access_log.*.txt
# 4. Example setenv.sh JVM memory tuning snippet
# Edit: /opt/atlassian/jira/bin/setenv.sh
JVM_MINIMUM_MEMORY="8192m"
JVM_MAXIMUM_MEMORY="8192m"
# 5. Example dbconfig.xml pool adjustment
# Edit: /var/atlassian/application-data/jira/dbconfig.xml
# <pool-max-size>100</pool-max-size>
# <pool-min-size>20</pool-min-size>
# 6. Restart Jira service to apply changes
sudo systemctl restart jira

Error Medic Editorial
The Error Medic Editorial team consists of senior Site Reliability Engineers and DevOps practitioners dedicated to solving enterprise infrastructure challenges.