Well-Known Labels, Annotations and Taints
Kubernetes reserves all labels and annotations in the kubernetes.io namespace.
This document serves both as a reference to the values and as a coordination point for assigning values.
kubernetes.io/arch
Example: kubernetes.io/arch=amd64
Used on: Node
The Kubelet populates this with runtime.GOARCH as defined by Go. This can be handy if you are mixing arm and x86 nodes.
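As an illustration, a Pod can be constrained to amd64 nodes with a nodeSelector on this label; a minimal sketch (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only-pod   # illustrative name
spec:
  nodeSelector:
    kubernetes.io/arch: amd64   # schedule only onto amd64 nodes
  containers:
  - name: app
    image: nginx   # illustrative; any amd64-compatible image
```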
kubernetes.io/os
Example: kubernetes.io/os=linux
Used on: Node
The Kubelet populates this with runtime.GOOS as defined by Go. This can be handy if you are mixing operating systems in your cluster (for example: mixing Linux and Windows nodes).
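The same constraint can also be expressed with node affinity; a minimal sketch that restricts a Pod to Linux nodes (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-only-pod   # illustrative name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
            - linux   # schedule only onto Linux nodes
  containers:
  - name: app
    image: nginx   # illustrative image
```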
beta.kubernetes.io/arch (deprecated)
This label has been deprecated. Please use kubernetes.io/arch instead.
beta.kubernetes.io/os (deprecated)
This label has been deprecated. Please use kubernetes.io/os instead.
kubernetes.io/hostname
Example: kubernetes.io/hostname=ip-172-20-114-199.ec2.internal
Used on: Node
The Kubelet populates this label with the hostname. Note that the hostname can be changed from the "actual" hostname by passing the --hostname-override flag to the kubelet.
This label is also used as part of the topology hierarchy. See topology.kubernetes.io/zone for more information.
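Because this is an ordinary label, a Pod can be pinned to a specific node with a nodeSelector; a minimal sketch using the example hostname above (the Pod name and image are illustrative). Note that pinning to a single hostname gives up the scheduler's flexibility and resilience to node failure:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod   # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: ip-172-20-114-199.ec2.internal   # the example hostname above
  containers:
  - name: app
    image: nginx   # illustrative image
```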
beta.kubernetes.io/instance-type (deprecated)
Note: Starting in v1.17, this label is deprecated in favor of node.kubernetes.io/instance-type.
node.kubernetes.io/instance-type
Example: node.kubernetes.io/instance-type=m3.medium
Used on: Node
The Kubelet populates this with the instance type as defined by the cloud provider. This will be set only if you are using a cloud provider. This setting is handy if you want to target certain workloads to certain instance types, but typically you want to rely on the Kubernetes scheduler to perform resource-based scheduling. You should aim to schedule based on properties rather than on instance types (for example: require a GPU, instead of requiring a g2.2xlarge).
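As an illustration of property-based scheduling, a Pod can request a GPU as an extended resource instead of targeting a GPU instance type; a minimal sketch, assuming a device plugin on the node exposes the nvidia.com/gpu resource (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod   # illustrative name
spec:
  containers:
  - name: cuda-app
    image: nvidia/cuda   # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1   # request a GPU rather than a specific instance type
```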
failure-domain.beta.kubernetes.io/region (deprecated)
See topology.kubernetes.io/region.
Note: Starting in v1.17, this label is deprecated in favor of topology.kubernetes.io/region.
failure-domain.beta.kubernetes.io/zone (deprecated)
See topology.kubernetes.io/zone.
Note: Starting in v1.17, this label is deprecated in favor of topology.kubernetes.io/zone.
topology.kubernetes.io/region
Example: topology.kubernetes.io/region=us-east-1
See topology.kubernetes.io/zone.
topology.kubernetes.io/zone
Example: topology.kubernetes.io/zone=us-east-1c
Used on: Node, PersistentVolume
On Node: The kubelet or the external cloud-controller-manager populates this with the information as provided by the cloud provider. This will be set only if you are using a cloud provider. However, you should consider setting this on nodes if it makes sense in your topology.
On PersistentVolume: topology-aware volume provisioners will automatically set node affinity constraints on PersistentVolumes.
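For example, a PersistentVolume whose node affinity restricts it to a single zone might look like the following minimal sketch (the volume name, CSI driver, and volume ID are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv   # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: example.csi.driver          # illustrative CSI driver name
    volumeHandle: vol-0123456789abcdef0 # illustrative volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east-1c   # this volume is only usable from this zone
```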
A zone represents a logical failure domain. It is common for Kubernetes clusters to span multiple zones for increased availability. While the exact definition of a zone is left to infrastructure implementations, common properties of a zone include very low network latency within a zone, no-cost network traffic within a zone, and failure independence from other zones. For example, nodes within a zone might share a network switch, but nodes in different zones should not.
A region represents a larger domain, made up of one or more zones. It is uncommon for Kubernetes clusters to span multiple regions. While the exact definition of a zone or region is left to infrastructure implementations, common properties of a region include higher network latency between regions than within them, non-zero cost for network traffic between regions, and failure independence from other zones or regions. For example, nodes within a region might share power infrastructure (e.g. a UPS or generator), but nodes in different regions typically would not.
Kubernetes makes a few assumptions about the structure of zones and regions:
- regions and zones are hierarchical: zones are strict subsets of regions and no zone can be in 2 regions
- zone names are unique across regions; for example, region "africa-east-1" might consist of zones "africa-east-1a" and "africa-east-1b"
It should be safe to assume that topology labels do not change. Even though labels are strictly mutable, consumers of them can assume that a given node is not going to be moved between zones without being destroyed and recreated.
Kubernetes can use this information in various ways. For example, the scheduler automatically tries to spread the Pods in a ReplicaSet across nodes in a single-zone cluster (to reduce the impact of node failures, see kubernetes.io/hostname). With multiple-zone clusters, this spreading behavior also applies to zones (to reduce the impact of zone failures). This is achieved via SelectorSpreadPriority.
SelectorSpreadPriority is a best-effort placement. If the zones in your cluster are heterogeneous (for example: different numbers of nodes, different types of nodes, or different pod resource requirements), this placement might prevent equal spreading of your Pods across zones. If desired, you can use homogeneous zones (same number and types of nodes) to reduce the probability of unequal spreading.
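If you need more explicit control over spreading than SelectorSpreadPriority provides, you can declare topology spread constraints against this label; a minimal sketch (the Pod name, label, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-pod   # illustrative name
  labels:
    app: web         # illustrative label
spec:
  topologySpreadConstraints:
  - maxSkew: 1                                 # allow at most 1 Pod of difference between zones
    topologyKey: topology.kubernetes.io/zone   # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: nginx   # illustrative image
```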
The scheduler (through the VolumeZonePredicate predicate) also ensures that Pods that claim a given volume are only placed into the same zone as that volume, because volumes cannot be attached across zones.
If PersistentVolumeLabel does not support automatic labeling of your PersistentVolumes, you should consider adding the labels manually (or adding support for PersistentVolumeLabel). With PersistentVolumeLabel, the scheduler prevents Pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't need to add the zone labels to the volumes at all.
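When adding the labels manually, you set the zone label in the PersistentVolume's metadata; a minimal sketch using the in-tree AWS EBS volume source (the volume name and volume ID are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manually-labeled-pv   # illustrative name
  labels:
    topology.kubernetes.io/zone: us-east-1c   # added manually so the scheduler can keep Pods in this zone
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # illustrative EBS volume ID
    fsType: ext4
```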