Description
What I want to achieve:
Ensure replicas are spread across availability zones and never run in the same zone.

Current approach:

I am currently doing this through placement preferences.
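For context, a minimal sketch of what such a setup typically looks like in a compose file (the service name, image, and the `zone` node label are placeholders, not taken from the original report):

```yaml
version: "3.8"
services:
  app:
    image: my-image  # placeholder image
    deploy:
      replicas: 2
      placement:
        # Soft preference: the scheduler tries to spread replicas
        # evenly across distinct values of node.labels.zone,
        # but will still co-locate them if a zone is unavailable.
        preferences:
          - spread: node.labels.zone
```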
The problem:
However, preferences are exactly that: preferences. If one zone becomes unavailable, its replica will be rescheduled into the other zone, which already contains a replica.

This is particularly troublesome because the service is backed by persistent storage (EBS) that is not replicated across availability zones, meaning the rescheduled replica will start up with an empty volume in that zone.
Is it possible to make a placement preference behave like a constraint, so that I can guarantee one replica always runs in a distinct zone, and if the other zone is unavailable, the replica is simply not started? If not, is there an alternative way of achieving this?
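One workaround, sketched below under the assumption that nodes carry a `zone` label (e.g. set via `docker node update --label-add zone=us-east-1a <node>`), is to split the service into one single-replica service per zone, each pinned with a hard placement constraint. The service names, label values, volume names, and image are all hypothetical:

```shell
# One service per availability zone, each with its own zone-local volume.
# A hard constraint means the replica stays pending rather than being
# rescheduled into the wrong zone if its zone becomes unavailable.
docker service create --name app-zone-a \
  --replicas 1 \
  --constraint 'node.labels.zone == us-east-1a' \
  --mount type=volume,source=app-data-a,target=/data \
  my-image

docker service create --name app-zone-b \
  --replicas 1 \
  --constraint 'node.labels.zone == us-east-1b' \
  --mount type=volume,source=app-data-b,target=/data \
  my-image
```

The trade-off is that the two replicas are no longer a single scaled service, so anything that addresses them (load balancing, `docker service scale`) must account for two service names; but each EBS-backed volume stays paired with its own zone.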