Delete listener resources without requeueing on each call#4289
nikola-jokic merged 1 commit into master from
Conversation
Pull Request Overview
This PR optimizes the deletion process for AutoscalingListener resources by eliminating unnecessary requeuing after each cleanup step. Instead of returning immediately after initiating each resource deletion (pod, secrets, role binding, role, service account), the controller now attempts to delete all resources in a single reconciliation loop and only requeues if any resources still exist.
Key Changes:
- Modified cleanup logic to continue through all resource deletions instead of returning early after each one
- Added a 1-second requeue delay when resources still exist to avoid tight reconciliation loops
- Renamed the return variable from `done` to `requeue` for clarity
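The single-pass cleanup pattern described above can be sketched with a toy in-memory store. All names here (`resourceStore`, `deleteIfExists`, `cleanupListenerResources`) are hypothetical stand-ins for the controller's real Kubernetes client calls, not the actual code in this PR:

```go
package main

import "fmt"

// resourceStore is a stand-in for the Kubernetes API: the set of listener
// resources (pod, secrets, role binding, role, service account) that still exist.
type resourceStore map[string]bool

// deleteIfExists issues a delete for the named resource and reports whether it
// still existed, i.e. whether the caller should requeue and check again later.
func deleteIfExists(store resourceStore, name string) bool {
	if !store[name] {
		return false // already gone (NotFound): nothing to do
	}
	delete(store, name) // issue the delete; real API deletes are asynchronous
	return true
}

// cleanupListenerResources attempts every deletion in one reconciliation pass
// and requeues only if anything still existed, instead of returning after each
// individual delete.
func cleanupListenerResources(store resourceStore) (requeue bool) {
	for _, name := range []string{"pod", "secret", "rolebinding", "role", "serviceaccount"} {
		if deleteIfExists(store, name) {
			requeue = true // remember we found something, but keep deleting
		}
	}
	return requeue
}

func main() {
	store := resourceStore{"pod": true, "role": true}
	fmt.Println(cleanupListenerResources(store)) // true: resources existed, requeue
	fmt.Println(cleanupListenerResources(store)) // false: everything is gone
}
```

With this shape, a busy cluster pays one reconciliation per cleanup round rather than one per resource, and the 1-second requeue delay only applies while something still exists.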
```diff
 	}
 }
-return false, nil
+requeue = true
```
The requeue flag is set but the function doesn't return immediately, potentially overwriting this value. Since the pod still exists, you should return here to maintain the original behavior where subsequent cleanup steps are only attempted after the pod is deleted. Otherwise, this creates a logic change where all resources are attempted to be deleted regardless of pod status.
Suggested change:
```diff
-requeue = true
+return true, nil
```
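The reviewer's suggestion amounts to restoring the early-return ordering, where the remaining resources are only touched once the pod is gone. A minimal sketch of the two orderings, using hypothetical functions (not the controller's real code):

```go
package main

import "fmt"

// cleanupEarlyReturn models the original behavior: if the pod still exists,
// return immediately and requeue; later cleanup steps never run this pass.
func cleanupEarlyReturn(podExists bool) (requeue bool, othersAttempted bool) {
	if podExists {
		return true, false // stop here; sequential ordering preserved
	}
	return false, true
}

// cleanupSinglePass models the PR's behavior: note that the pod exists,
// but fall through and attempt the remaining deletes in the same pass.
func cleanupSinglePass(podExists bool) (requeue bool, othersAttempted bool) {
	if podExists {
		requeue = true // record the state instead of returning
	}
	return requeue, true
}

func main() {
	r1, o1 := cleanupEarlyReturn(true)
	r2, o2 := cleanupSinglePass(true)
	fmt.Println(r1, o1) // true false: later steps deferred until pod is gone
	fmt.Println(r2, o2) // true true: all deletes attempted in one pass
}
```

The PR intentionally chooses the second shape, so the "overwriting" the comment warns about is the point: `requeue` accumulates whether any resource still existed, and the function returns it once at the end.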
```diff
 logger.Info("Listener service account is deleted")

-return true, nil
+return requeue, nil
```
The requeue variable is uninitialized if all resources are already deleted (all get operations return NotFound). This will return false by default in Go, but it's safer to explicitly initialize requeue := false at the start of the function to make the intent clear and avoid potential issues if the function logic changes.
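The zero-value point is easy to verify in isolation: a named bool return value in Go starts as `false`, so nothing is technically uninitialized. A minimal sketch (hypothetical function, not the controller's code):

```go
package main

import "fmt"

// cleanup shows that a named return value starts at its type's zero value:
// if no branch ever sets requeue, the function returns false.
func cleanup(resourcesExist bool) (requeue bool) {
	if resourcesExist {
		requeue = true
	}
	return // returns the named value; false when nothing set it
}

func main() {
	fmt.Println(cleanup(false)) // false: the zero value of bool
	fmt.Println(cleanup(true))  // true
}
```

An explicit `requeue := false` would carry no behavioral change; the comment is arguing for readability, not correctness.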
During deletion, the listener requeues after each cleanup step, which can take a long time on busy clusters.
This change implements the same cleanup approach we use for ephemeral runners (issuing multiple delete requests in a single reconciliation loop), so cleanup should be much faster.
Fixes #4200