The Kubernetes API documentation makes the following claim about pod disruption budgets (emphasis mine):
If you set maxUnavailable to 0% or 0, or you set minAvailable to 100% or the number of replicas, you are requiring zero voluntary evictions. When you set zero voluntary evictions for a workload object such as ReplicaSet, then you cannot successfully drain a Node running one of those Pods. If you try to drain a Node where an unevictable Pod is running, the drain never completes.
Is that really how it is implemented in practice? That is, will a maxUnavailable: 0 budget actually prevent a node from being drained?
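To make the scenario concrete, here is a minimal sketch of a PodDisruptionBudget of the kind the documentation describes. The names (web-pdb, the app: web label) are hypothetical, chosen only for illustration:

```yaml
# Hypothetical PodDisruptionBudget: with maxUnavailable: 0, the eviction
# API should refuse every voluntary eviction of the pods selected below.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: web
```

The question, then, is what kubectl drain does when it hits such a pod: per the quoted documentation it should keep retrying the eviction indefinitely rather than complete.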
I would hope and expect that the normal cluster behavior for voluntary disruptions is surge-then-terminate: before any pod is terminated, a replacement pod is started elsewhere, keeping the number of pods (on a best-effort basis) exactly at the level I have specified or that the autoscaler has determined. That strategy would not require tolerating any unavailability in order to migrate pods from one node to another.
Kubernetes explicitly allows configuring this behavior for rolling updates via maxSurge. But is the same not available for other cluster operations, such as node drains?
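For comparison, this is the rolling-update configuration I mean. The excerpt is a sketch of a Deployment spec; the replica count is arbitrary:

```yaml
# Hypothetical Deployment excerpt: during a rolling update, maxSurge: 1
# lets one replacement pod start before any old pod is terminated, so
# maxUnavailable: 0 is satisfiable without blocking the rollout.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```

An equivalent surge mechanism for evictions during a drain is what I would have expected, but I cannot find a way to configure it.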