Yield Break from RetryState::wait() on last iteration#119

Merged
djc merged 3 commits into main from break-last
Jul 2, 2025
Conversation

@djc
Owner

@djc djc commented Jun 30, 2025

As suggested by @christianhoelzl in #116 (comment):

One option is to preemptively end the retry if the next iteration would exceed the timeout. This is how acme4j does it.

Doesn't really solve the original problem I highlighted; let's discuss here:

Uhh, not sure how this interaction [between timeout and retry-after] will work? Seems like a strong chance the server might yield a Retry-After of more than the timeout set here, which means we'll only ever try once...

@djc djc requested a review from cpu June 30, 2025 10:56
@christianhoelzl
Contributor

That problem seems more in the scope of the application that uses instant-acme. Rather than retrying only once, the application could enter a pending state and retry a certificate enrollment from scratch no earlier than the given Retry-After date-time, for example if the CA will not issue certs for the next few days because you requested too many recently. If the application can observe the Retry-After value via the instant-acme API, then it has the data at hand to handle this temporary error condition. Maybe extend types::Error::Api to yield the Retry-After header?

@djc
Owner Author

djc commented Jun 30, 2025

That problem seems more in the scope of the application that uses instant-acme. Rather than retrying only once, the application could enter a pending state and retry a certificate enrollment from scratch no earlier than the given Retry-After date-time, for example if the CA will not issue certs for the next few days because you requested too many recently. If the application can observe the Retry-After value via the instant-acme API, then it has the data at hand to handle this temporary error condition. Maybe extend types::Error::Api to yield the Retry-After header?

Yielding a specific error variant seems like a decent potential way out of this problem...

@cpu thoughts?

@cpu
Collaborator

cpu commented Jul 1, 2025

Yielding a specific error variant seems like a decent potential way out of this problem...

That sounds reasonable to me 👍 I agree with @christianhoelzl that this feels like a place where higher-level application logic should take over.

@djc djc mentioned this pull request Jul 2, 2025
@djc djc requested a review from cpu July 2, 2025 18:15
@djc djc merged commit 489d21b into main Jul 2, 2025
10 checks passed
@djc djc deleted the break-last branch July 2, 2025 19:56