
Conversation

@ehaydenr
Contributor

The current logic sends MAX_STREAMS too aggressively. As the next MAX_STREAMS value grows, the condition for deciding whether to send MAX_STREAMS eventually becomes unconditionally true. This change fixes that so that MAX_STREAMS is sent only when 50% or more streams have completed.
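
To make the "unconditionally true" behaviour concrete, here is a minimal sketch. It is not the quiche source: the struct, the numbers, and the `_old` helper name are illustrative, and only the field names are taken from the diff quoted below.

```rust
// Sketch of the old check on a long-lived connection: the cumulative
// `local_max_streams_bidi_next` keeps growing, while the number of streams
// still available to the peer stays bounded by the window.
struct Streams {
    local_max_streams_bidi: u64,      // last limit advertised via MAX_STREAMS
    local_max_streams_bidi_next: u64, // limit to advertise next; grows as streams complete
    peer_opened_streams_bidi: u64,    // cumulative bidi streams opened by the peer
}

impl Streams {
    // Old condition (as quoted in the diff below).
    fn should_update_max_streams_bidi_old(&self) -> bool {
        self.local_max_streams_bidi_next != self.local_max_streams_bidi &&
            self.local_max_streams_bidi_next / 2 >
                self.local_max_streams_bidi - self.peer_opened_streams_bidi
    }
}

fn main() {
    // After many requests the counters are large, but only 9 streams are
    // still available under the advertised limit (1_000 - 991).
    let s = Streams {
        local_max_streams_bidi: 1_000,
        local_max_streams_bidi_next: 1_001, // one more stream just completed
        peer_opened_streams_bidi: 991,
    };
    // 1_001 / 2 = 500 > 9, so the check is true after every completion and a
    // MAX_STREAMS frame ends up being queued each time.
    assert!(s.should_update_max_streams_bidi_old());
}
```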

@ehaydenr ehaydenr requested a review from a team as a code owner November 13, 2025 18:26
    .saturating_sub(self.peer_opened_streams_bidi);
self.local_max_streams_bidi_next != self.local_max_streams_bidi &&
    self.local_max_streams_bidi_next / 2 >
        self.local_max_streams_bidi - self.peer_opened_streams_bidi
Contributor

@antoniovicente antoniovicente Nov 13, 2025


So the logic before was to send an increase following each stream completion?

Consider an application that requires 9 active requests at all times, with a stream limit of 10. If 2 of the streams close and the client attempts to re-create them, it will only be able to create one of the two, and it will remain in this state until 3 other streams close and trigger a MAX_STREAMS update.

So the application would need 2 QUIC connections to work reliably in this case, even though the number of streams it wants to keep open is less than the per-connection stream limit.

Contributor Author


In that case, I would expect "available" to be 1, which is less than 10/2, so we should be sending MAX_STREAMS. I added another test case to try to capture this scenario.
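
For concreteness, a hedged walk-through of the numbers in that scenario (the values are illustrative; the field names and `saturating_sub` come from the diff above, and "available" is the quantity referred to in the reply):

```rust
fn main() {
    // Reviewer's scenario: advertised limit of 10, peer has opened 9 streams,
    // and 2 of them have just closed.
    let local_max_streams_bidi: u64 = 10;
    let peer_opened_streams_bidi: u64 = 9;

    // "available" = streams the peer may still open under the advertised limit.
    let available = local_max_streams_bidi.saturating_sub(peer_opened_streams_bidi);
    assert_eq!(available, 1);

    // 1 < 10 / 2, so the 50% threshold is crossed and a MAX_STREAMS update is
    // due rather than leaving the application stuck below 9 active streams.
    assert!(available < local_max_streams_bidi / 2);
}
```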

@ghedo ghedo enabled auto-merge (squash) November 20, 2025 09:59
