Kubernetes removed the legacy scheduling policy mechanism in v1.23. Prior to that release, operators could customize pod placement with the `--policy-config-file` or `--policy-configmap` flags. The Scheduler Configuration API replaces those flags and delivers more flexibility and safety.
For details on the new Scheduler Configuration API, see Understanding the Kubernetes Scheduler.
TL;DR
- Legacy scheduling policies used predicates and priorities to filter and score nodes.
- The kube-scheduler flags `--policy-config-file`, `--policy-configmap`, `--policy-configmap-namespace`, and `--use-legacy-policy-config` were removed in v1.23.
- Users must switch to Scheduler Configuration via the ComponentConfig API instead of policy files.
- Policy file syntax: kind `Policy`, apiVersion `v1`, with lists of predicates and priorities in JSON or YAML.
- Migration requires mapping legacy policy sections onto Scheduler Configuration plugins and plugin configs.
- Custom scheduling logic is still supported through the framework's preFilter, filter, score, reserve, permit, preBind, and bind extension points.
Scheduling Policies Overview
Scheduling policies provided a way to tune scheduling decisions in Kubernetes releases before v1.23. Administrators could define which nodes qualified to run a pod (predicates) and assign weights to node scores (priorities). The `kube-scheduler` binary accepted a policy file or a ConfigMap containing the JSON or YAML policy object.
Legacy Scheduling Policies in Kubernetes <1.23
Legacy policies followed this lifecycle:
- Define a Policy object in JSON or YAML.
- Pass `--policy-config-file=/path/to/policy.json` or `--policy-configmap=<name>` at kube-scheduler startup.
- The scheduler read the policy and loaded its predicates and priorities into memory.
- Each pod went through the predicate phase, then the priority phase for scoring.
- The scheduler picked the highest-scoring node and bound the pod.
A minimal policy in JSON:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "PodFitsResources"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "SelectorSpreadPriority", "weight": 2}
  ]
}
```
Alternate YAML format:

```yaml
kind: Policy
apiVersion: v1
predicates:
  - name: PodFitsHostPorts
  - name: PodFitsResources
priorities:
  - name: LeastRequestedPriority
    weight: 1
  - name: SelectorSpreadPriority
    weight: 2
```
Scheduling Policies: Predicates and Priorities
Predicates execute boolean checks; they exclude nodes that can’t host the pod. Common predicates:
- PodFitsResources – checks that the node can satisfy the pod’s CPU and memory requests.
- PodFitsHostPorts – ensures the requested host ports are available on the node.
- MatchNodeSelector – matches node labels against the pod’s nodeSelector (see the example below).
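For instance, with MatchNodeSelector only nodes labeled to match the pod’s nodeSelector survive filtering. A minimal illustration (the `disktype=ssd` label is a hypothetical example):

```yaml
# Hypothetical pod: MatchNodeSelector excludes every node
# that lacks the label disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: ssd-app
spec:
  nodeSelector:
    disktype: ssd
  containers:
    - name: app
      image: nginx
```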
Priorities assign node scores; a higher weight increases a priority’s influence on the final ranking, as the weighted-sum sketch after this list shows. Examples:
- LeastRequestedPriority – favors nodes with more free resources.
- BalancedResourceAllocation – balances CPU and memory usage across nodes.
- NodePreferAvoidPodsPriority – steers pods away from nodes annotated with `scheduler.alpha.kubernetes.io/preferAvoidPods`.
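The legacy scheduler combined priorities as a weighted sum, with each priority function returning a score from 0 to 10 per node. The node totals below are a hypothetical illustration of how weights shift the ranking:

```yaml
# With these weights, final score = 1*LeastRequestedPriority + 2*SelectorSpreadPriority.
# Hypothetical node scores:
#   nodeA: 1*8 + 2*3 = 14
#   nodeB: 1*5 + 2*6 = 17   -> nodeB is chosen
priorities:
  - name: LeastRequestedPriority
    weight: 1
  - name: SelectorSpreadPriority
    weight: 2
```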
Configuring Scheduling Policies via policy-config-file
To apply a local policy file:
```sh
kube-scheduler \
  --policy-config-file=/etc/kubernetes/scheduler-policy.yaml \
  --use-legacy-policy-config
```
The `--use-legacy-policy-config` flag forced the scheduler to honor the legacy policy flags over any newer component configuration. Operators could also set `--policy-configmap-namespace` when sourcing the policy from a ConfigMap.
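On kubeadm-managed control planes, these flags typically went into the kube-scheduler static Pod manifest. The sketch below assumes a kubeadm file layout and mounts the policy file from the host; the paths, image tag, and mount are illustrative assumptions:

```yaml
# Excerpt of /etc/kubernetes/manifests/kube-scheduler.yaml (assumed layout)
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
    - name: kube-scheduler
      image: registry.k8s.io/kube-scheduler:v1.22.0   # a pre-removal release
      command:
        - kube-scheduler
        - --policy-config-file=/etc/kubernetes/scheduler-policy.yaml
        - --use-legacy-policy-config=true
      volumeMounts:
        - name: scheduler-policy
          mountPath: /etc/kubernetes/scheduler-policy.yaml
          readOnly: true
  volumes:
    - name: scheduler-policy
      hostPath:
        path: /etc/kubernetes/scheduler-policy.yaml
        type: File
```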
Scheduling Policies: ConfigMap-based Approach
ConfigMap mode kept the policy object inside the cluster instead of on the control-plane host, although the scheduler still read it only at startup (see the limitations below). Steps:
- Create a ConfigMap in the kube-system namespace:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-policy
  namespace: kube-system
data:
  policy.json: |
    {"kind":"Policy","apiVersion":"v1","predicates":[],"priorities":[]}
```
- Launch kube-scheduler with the `--policy-configmap` flag:

```sh
kube-scheduler \
  --policy-configmap=scheduler-policy \
  --policy-configmap-namespace=kube-system
```
Limitations and Deprecation of Scheduling Policies
Kubernetes removed the legacy scheduling policy flags in v1.23 after a deprecation period. Gone are `--policy-config-file`, `--policy-configmap`, `--policy-configmap-namespace`, and `--use-legacy-policy-config`.
Reasons for deprecation:
- Lack of schema validation for policy files.
- Complexity of JSON-based plugin selection.
- Inability to reload policy without restart.
Migrating Scheduling Policies to Scheduler Configuration
The replacement uses the ComponentConfig API: you supply a static YAML file with apiVersion `kubescheduler.config.k8s.io/v1beta2` that defines plugin chains for each scheduling stage.
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultingType: List        # required when defaultConstraints are set
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: LeastAllocated      # replaces LeastRequestedPriority
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
    plugins:
      filter:
        enabled:
          - name: NodeResourcesFit    # replaces the PodFitsResources predicate
      score:
        enabled:
          - name: NodeResourcesFit
            weight: 1
```
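The file is handed to the scheduler with the `--config` flag; on a kubeadm control plane that means editing the static Pod command (the file path below is an assumption):

```yaml
# Excerpt of the kube-scheduler static Pod spec (path assumed)
command:
  - kube-scheduler
  - --config=/etc/kubernetes/scheduler-config.yaml
```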
Scheduler Configuration vs Scheduling Policies
The new model treats predicates and priorities as filter and score plugins. ComponentConfig enforces schema validation at startup, and later API versions (v1beta3, then v1) continue to refine the schema. A single configuration can group multiple profiles to support multi-scheduler setups.
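As a sketch of a multi-profile setup (the `batch-scheduler` name is a hypothetical example; pods opt in via `spec.schedulerName`):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
  - schedulerName: batch-scheduler              # hypothetical second profile
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation   # skip balancing for batch pods
```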
Use Cases for Custom Scheduling Policies
Operators may still require custom logic:
- Enforce zone-aware placement for HA workloads.
- Isolate batch jobs on specific node pools.
- Implement priority inheritance across namespaces.
Custom plugins integrate with the scheduler framework: build a Go module that implements the Filter or Score interface, compile it into a custom scheduler binary, and reference the plugin by name in the scheduler config.
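The custom plugin is then enabled by name in a profile; in this sketch, `ZoneAffinity` is a hypothetical out-of-tree plugin compiled into the scheduler binary:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: custom-scheduler
    plugins:
      filter:
        enabled:
          - name: ZoneAffinity   # hypothetical plugin implementing the Filter interface
```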
References
- Scheduling Policies | Kubernetes Official Documentation.
- GitHub issue: remove scheduler policy config.