Apache Airflow version
2.6.3
What happened
It seems that with the new AWS provider package, when using the deferrable keyword in the EcsRunTaskOperator, the execution_timeout is ignored and the task is killed by a different timeout: the trigger timeout, which appears to be timeout=timedelta(seconds=self.waiter_max_attempts * self.waiter_delay + 60).
Also, when the trigger hits that timeout, the task seems to be marked as "success" even though it hasn't finished. It doesn't seem to kill the ECS task either.
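For illustration, plugging hypothetical waiter settings into that formula shows how easily the trigger timeout can undercut a typical execution_timeout (the values below are made up for the example):

```python
from datetime import timedelta

# Hypothetical waiter settings, chosen only to illustrate the formula above.
waiter_delay = 6          # seconds between status checks
waiter_max_attempts = 10  # number of status checks

trigger_timeout = timedelta(seconds=waiter_max_attempts * waiter_delay + 60)
print(trigger_timeout)  # 0:02:00 -- fires long before, say, execution_timeout=timedelta(hours=2)
```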
What you think should happen instead
The execution_timeout should be used for the trigger timeout, or at least a warning should be logged when the trigger timeout overrides it or is smaller.
How to reproduce
Run an EcsRunTaskOperator task in deferrable mode with a large execution_timeout and a small waiter_max_attempts (see the sketch below). The task terminates because the trigger times out before the execution_timeout is up.
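A minimal reproduction sketch, assuming placeholder cluster and task-definition names; the exact kwargs may differ slightly between provider versions:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.operators.ecs import EcsRunTaskOperator

with DAG(
    dag_id="ecs_deferrable_timeout_repro",
    start_date=datetime(2023, 1, 1),
    schedule=None,
    catchup=False,
):
    EcsRunTaskOperator(
        task_id="run_task",
        cluster="my-cluster",            # placeholder
        task_definition="my-task-def",   # placeholder
        launch_type="FARGATE",
        overrides={},
        deferrable=True,
        execution_timeout=timedelta(hours=2),   # large execution_timeout
        waiter_delay=6,
        waiter_max_attempts=10,  # small number of attempts -> trigger timeout hits first
    )
```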
Operating System
Linux (Ubuntu)
Versions of Apache Airflow Providers
No response
Deployment
Other Docker-based deployment
Deployment details
No response
Anything else
No response
Are you willing to submit PR?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project's Code of Conduct