Describe the bug
When using the QBusinessAsyncClient async chat method and later closing the client, we received exceptions caused by timeouts while waiting for pool closure in software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap#close. The client was used in a Spring Boot Kotlin coroutines environment, where blocking a thread for such a long duration is unexpected; most likely there was a deadlock.
Ideally, the close method should not block for such a long duration. Could you perhaps provide an async close method that callers could await?
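For context, a minimal sketch (assuming kotlinx-coroutines is on the classpath) of how a coroutine caller currently has to shield itself from the blocking close:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import software.amazon.awssdk.services.qbusiness.QBusinessAsyncClient

// Workaround sketch: run the potentially blocking close() on Dispatchers.IO
// so a 5+ second wait does not stall the coroutine's original thread.
suspend fun closeClient(client: QBusinessAsyncClient) = withContext(Dispatchers.IO) {
    client.close()
}
```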
Regression Issue
- Select this option if this issue appears to be a regression.
Expected Behavior
QBusinessAsyncClient.close should not block the current thread for 5 seconds, and the internal waiting in software.amazon.awssdk.http.nio.netty.internal.http2.Http2MultiplexedChannelPool#close should not block it for a further 10 seconds.
Current Behavior
Under some conditions, QBusinessAsyncClient.close can block the current thread for 5+ seconds.
Reproduction Steps
- Create a simple Spring controller that returns a Flow.
- Create a QBusinessAsyncClient and start a chat.
- Using callbackFlow, push the messages received from the chat into the flow (see the sketch after this list).
- At the end of the stream, close the client.
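For reference, a minimal Kotlin sketch of this setup, assuming kotlinx-coroutines; ChatMessage and startChat are hypothetical placeholders for the real streaming chat wiring, which is omitted here:

```kotlin
import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.callbackFlow
import software.amazon.awssdk.services.qbusiness.QBusinessAsyncClient

// Hypothetical message type pushed to the controller's response; not an SDK type.
data class ChatMessage(val text: String)

// Hypothetical helper that wires up the streaming chat call and its callbacks;
// the real code would go through the client's chat operation with a response
// handler, omitted to keep the sketch short.
fun startChat(
    client: QBusinessAsyncClient,
    onMessage: (ChatMessage) -> Unit,
    onComplete: () -> Unit,
) {
    // omitted
}

// Returned from a Spring controller so chat messages stream to the caller.
fun chatFlow(): Flow<ChatMessage> = callbackFlow {
    val client = QBusinessAsyncClient.create()
    startChat(
        client,
        onMessage = { trySend(it) }, // push each chat message downstream
        onComplete = { close() },    // end of stream: complete the flow
    )
    awaitClose {
        // Closing the client here is where the 5+ second blocking is observed.
        client.close()
    }
}
```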
Possible Solution
No response
Additional Information/Context
No response
AWS Java SDK version used
2.31.77
JDK version used
OpenJDK Runtime Environment Corretto-21.0.7.6.1
Operating System and version
macOS Version 15.5 (24F74), but it happens on Linux as well