OpsCanary

Mastering DaemonSets: Ensuring Node-Local Facilities in Kubernetes

5 min read · Kubernetes Docs · Apr 28, 2026

DaemonSets exist to solve a common problem in Kubernetes: ensuring that specific services run on every eligible node. This is particularly important for tasks like logging, monitoring, or any service that requires node-local access. By using a DaemonSet, you can automate the deployment of Pods that provide these facilities, ensuring consistency and reliability across your cluster.

A DaemonSet works by creating one Pod for each eligible node. For each target node, the DaemonSet controller adds a .spec.affinity.nodeAffinity term to the Pod that matches that node's name. The default scheduler then takes over, binding the Pod to the node by setting its .spec.nodeName field. If the new Pod cannot fit on the node, the scheduler may preempt (evict) existing Pods based on priority. This means that if you want your DaemonSet Pods to always run, consider setting .spec.template.spec.priorityClassName to a sufficiently high priority class. Additionally, the DaemonSet controller automatically adds tolerations to its Pods, allowing them to run on nodes that are marked as unschedulable.
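The automatically added tolerations look roughly like the following when you inspect a running DaemonSet Pod; the exact set depends on your Kubernetes version, so treat this as an illustrative sketch rather than a definitive list:

```yaml
# Illustrative tolerations added automatically to DaemonSet Pods.
# Verify against a real Pod with: kubectl get pod <name> -o yaml
tolerations:
- key: node.kubernetes.io/not-ready        # stay during node readiness blips
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/unreachable      # stay when the node is unreachable
  operator: Exists
  effect: NoExecute
- key: node.kubernetes.io/disk-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/memory-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/pid-pressure
  operator: Exists
  effect: NoSchedule
- key: node.kubernetes.io/unschedulable    # run even on cordoned nodes
  operator: Exists
  effect: NoSchedule
```

These tolerations are why DaemonSet Pods keep running through conditions that would evict ordinary workloads.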

In production, you need to be aware of a few key gotchas. First, .spec.selector is immutable after creation, and it must match .spec.template.metadata.labels; if you work around the immutability by deleting and recreating the DaemonSet with a different selector, the existing Pods are orphaned. Also, while DaemonSets can run on unschedulable (cordoned) nodes thanks to the automatic tolerations, this behavior might not always be desirable. Always evaluate your node configurations and Pod priorities to avoid unexpected evictions or scheduling conflicts.
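The selector-to-labels pairing that must stay in sync can be sketched like this (the `name: node-agent` label is illustrative, not from the article):

```yaml
# Minimal sketch of the required selector <-> template-labels pairing.
spec:
  selector:
    matchLabels:
      name: node-agent        # immutable once the DaemonSet is created
  template:
    metadata:
      labels:
        name: node-agent      # must match .spec.selector, or the API rejects it
```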

Key takeaways

  • Configure .spec.template.spec.priorityClassName to ensure DaemonSet Pods preempt lower-priority Pods.
  • Use .spec.template.spec.nodeSelector or .spec.template.spec.affinity to control which nodes your DaemonSet Pods are scheduled on.
  • Avoid mutating .spec.selector to prevent orphaning existing Pods.
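For the first takeaway, the priority class has to exist before the DaemonSet can reference it. A minimal sketch of such a PriorityClass follows; the name and value are hypothetical, chosen here for illustration:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-critical     # hypothetical name
value: 1000000                 # higher values win preemption contests
globalDefault: false
description: "Ensures node-local agents preempt ordinary workloads."
```

You would then set `priorityClassName: daemonset-critical` under .spec.template.spec in the DaemonSet manifest.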

Why it matters

In production, DaemonSets ensure critical services like logging and monitoring run consistently across all nodes, which is vital for maintaining observability and reliability in your applications.

Code examples

YAML
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v5.0.1
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      # it may be desirable to set a high priority class to ensure that a DaemonSet Pod
      # preempts running Pods
      # priorityClassName: important
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
shell
kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
YAML
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name

When NOT to use this

The official docs don't list hard anti-patterns, but they do describe alternatives. If your workload doesn't need a copy on every node — for example, a stateless service where you care about replica count rather than node coverage — a Deployment is usually the better fit. Reserve DaemonSets for genuinely node-local facilities, and use your judgment based on your scale and requirements.
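As the official docs note under "Alternatives to DaemonSet", a stateless service that doesn't need per-node placement is typically run as a Deployment instead. A minimal sketch for contrast (the name and image are placeholders, not from the article):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-collector    # hypothetical name
spec:
  replicas: 3                  # you pick a count; no per-node guarantee
  selector:
    matchLabels:
      app: stateless-collector
  template:
    metadata:
      labels:
        app: stateless-collector
    spec:
      containers:
      - name: collector
        image: example.com/collector:latest   # placeholder image
```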

