Description
Describe the bug
We have noticed what appears to be a Redis connection leak with the quarkus-redis-cache extension. There seems to be a scenario in which valueLoader.apply(key) never completes, so the .onTermination().call(con::close) in RedisCacheImpl.java:472 is never executed. This leaves connections occupied and never returned to the pool. The image below shows the active connections of one of our applications: every instance accumulates an ever-increasing number of connections until it is either shut down because of a Redis 'health' issue (at 70 connections) or demand decreases.

I have also configured the pool-cleaner-interval and pool-recycle-interval, but these do not seem to help. I suspect the invocations wait indefinitely and are therefore never eligible to be cleaned or recycled.
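To illustrate the mechanism described above with plain JDK classes (this is an analogy, not Quarkus or Mutiny code; the class and variable names are made up for this sketch): a cleanup callback chained onto an asynchronous stage only fires when that stage terminates, so if the valueLoader-style computation never completes, the close action never runs and the resource stays held.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class LeakSketch {
    public static void main(String[] args) throws Exception {
        // Stands in for the Redis connection taken from the pool.
        AtomicBoolean connectionClosed = new AtomicBoolean(false);

        // Stands in for valueLoader.apply(key): a stage that never completes.
        CompletableFuture<String> neverCompletes = new CompletableFuture<>();

        // Stands in for .onTermination().call(con::close): it only runs on
        // success, failure, or cancellation of the stage.
        neverCompletes.whenComplete((value, error) -> connectionClosed.set(true));

        // Give it a moment: the cleanup has not fired, the "connection" is still open.
        TimeUnit.MILLISECONDS.sleep(200);
        System.out.println("connection closed: " + connectionClosed.get());

        // Only terminating the stage (here via cancellation) releases the resource.
        neverCompletes.cancel(true);
        System.out.println("connection closed after cancel: " + connectionClosed.get());
    }
}
```

This is why cleaner/recycler intervals alone cannot help: a connection tied to a never-terminating stage is, from the pool's point of view, still legitimately in use.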
Expected behavior
Connections should be recycled and returned to the pool regardless of what happens to the invocation.
Actual behavior
Connections are not closed because the .onTermination().call(con::close) in RedisCacheImpl.java:472 is never executed. As a result, the connection pool slowly fills up until connections can no longer be acquired.
How to Reproduce?
I have created a small reproducer: https://github.com/peterkrol/quarkus-cache-connection-reproducer
Once it is running, call the GET /cache-test endpoint and check the /q/health page. The request should complete after 10 seconds, and the number of currently active connections should be 3. The reproducer itself is contrived, but it highlights the issue.
Output of uname -a or ver
Darwin OMD243828 24.5.0 Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 arm64
Output of java -version
openjdk version "21.0.6" 2025-01-21
Quarkus version or git rev
3.23.2
Build tool (i.e. output of mvnw --version or gradlew --version)
Apache Maven 3.9.2 (c9616018c7a021c1c39be70fb2843d6f5f9b8a1c)
Additional information
No response