Creating a Kubernetes Cluster with Runtime Observability

With contributions from Sebastian Choren, Adnan Rahić and Ken Hamric.

Kubernetes is an open source system, widely used in the cloud native landscape, that provides ways to deploy and scale containerized applications in the cloud. Its observability for logs and metrics is well known and documented, but its observability for application traces is newer.

Here is a brief synopsis of the recent tracing-related activity in the Kubernetes ecosystem:

  • kube-apiserver tracing (KEP-647) was released as alpha in Kubernetes 1.22 and graduated to beta in 1.27
  • kubelet tracing (KEP-2831) was released as alpha in Kubernetes 1.25 and graduated to beta in 1.27
  • containerd added OpenTelemetry tracing support in version 1.6.0

In investigating the current state of tracing with Kubernetes, we found very few articles documenting how to enable it, such as this article on the Kubernetes blog about kubelet observability. We decided to document our findings and provide step-by-step instructions for setting up Kubernetes locally and inspecting traces.

You’ll learn how to use this instrumentation with Kubernetes to start observing traces from its API server (kube-apiserver), node agent (kubelet), and container runtime (containerd) by setting up a local observability environment and then doing a local install of Kubernetes with tracing enabled.

First, install the following tools on your local machine:

  • Docker: a container engine that allows us to run containerized applications
  • k3d: a wrapper to run k3s (a lightweight Kubernetes distribution) with Docker
  • kubectl: a Kubernetes CLI to interact with clusters
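Before continuing, it can help to confirm the three tools are actually installed. This small Python sketch (any recent Python 3; the tool names are assumed to match the installed binary names) simply looks each one up on your PATH:

```python
import shutil

# Tools required by this walkthrough; names assumed to match the installed binaries
TOOLS = ["docker", "k3d", "kubectl"]

for tool in TOOLS:
    path = shutil.which(tool)  # None when the binary is not on PATH
    print(f"{tool}: {'found at ' + path if path else 'MISSING'}")
```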

Setting up an Observability Stack to Monitor Traces

To set up the observability stack, you’ll run the OpenTelemetry (OTel) Collector, a tool that receives telemetry data from different apps and sends it to a tracing backend. As a tracing backend, you’ll use Jaeger, an open source tool that collects traces and lets you query them.

On your machine, create a directory called kubetracing and, inside it, create a file called otel-collector.yaml with the contents of the following snippet.

This file will configure the OpenTelemetry Collector to receive traces in OpenTelemetry format and export them to Jaeger.

    receivers:
      otlp:
        protocols:
          grpc:
          http:

    processors:
      probabilistic_sampler:
        hash_seed: 22
        sampling_percentage: 100
      batch:
        timeout: 100ms

    exporters:
      logging:
        logLevel: debug
      otlp/jaeger:
        endpoint: jaeger:4317
        tls:
          insecure: true

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [probabilistic_sampler, batch]
          exporters: [otlp/jaeger, logging]

After that, in the same folder, create a docker-compose.yaml file that defines two containers, one for Jaeger and another for the OpenTelemetry Collector.

    services:
      jaeger:
        healthcheck:
          test:
            - CMD
            - wget
            - --spider
            - localhost:16686
          timeout: 3s
          interval: 1s
          retries: 60
        image: jaegertracing/all-in-one:latest
        restart: unless-stopped
        ports:
          - 16686:16686

      otel-collector:
        command:
          - --config
          - /otel-local-config.yaml
        depends_on:
          jaeger:
            condition: service_started
        image: otel/opentelemetry-collector:0.54.0
        ports:
          - 4317:4317
        volumes:
          - ./otel-collector.yaml:/otel-local-config.yaml

Now, start the observability environment by running the following command in the kubetracing folder:

docker compose up

This will start both Jaeger and the OpenTelemetry Collector, enabling them to receive traces from other apps.
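To confirm both containers came up, you can probe the two published ports (16686 for the Jaeger UI, 4317 for the Collector’s OTLP gRPC receiver). A minimal sketch, assuming the compose stack above is running on localhost:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports published by the docker-compose file above
for name, port in [("jaeger-ui", 16686), ("otel-collector-otlp", 4317)]:
    print(name, "up" if port_open("localhost", port) else "down")
```

If either port reports "down", check `docker compose logs` for the failing container before moving on.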

Creating a Kubernetes Cluster with Runtime Observability

With the observability environment set up, create the configuration files to enable OpenTelemetry tracing in kube-apiserver, kubelet, and containerd.

Inside the kubetracing folder, create a subfolder called config that will have the following two files.

First, the apiserver-tracing.yaml, which contains the tracing configuration used by kube-apiserver to export traces containing execution data of the Kubernetes API. In this configuration, set the API to send 100% of the traces with the samplingRatePerMillion setting. Set the endpoint as host.k3d.internal:4317 to allow the cluster created by k3d/k3s to call another API on your machine; in this case, that is the OpenTelemetry Collector deployed via docker compose on port 4317.

apiVersion: apiserver.config.k8s.io/v1beta1
kind: TracingConfiguration
endpoint: host.k3d.internal:4317
samplingRatePerMillion: 1000000 # 100%
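Note that samplingRatePerMillion expresses sampling as an integer count out of one million requests rather than as a percentage. A small helper (hypothetical, purely for illustration) to convert between the two:

```python
def sampling_rate_per_million(percent: float) -> int:
    """Convert a sampling percentage (0-100) to a samplingRatePerMillion value."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return int(percent * 10_000)  # 100% -> 1_000_000, 1% -> 10_000

print(sampling_rate_per_million(100))  # 1000000, the value used above
print(sampling_rate_per_million(0.1))  # 1000: sample one request per thousand
```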

The second file is kubelet-tracing.yaml, which provides additional configuration for kubelet. Here you’ll enable the feature flag KubeletTracing (a beta feature in Kubernetes 1.27, the current version when this article was written) and set the same tracing settings that were set on kube-apiserver.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletTracing: true
tracing:
  endpoint: host.k3d.internal:4317
  samplingRatePerMillion: 1000000 # 100%

Returning to the kubetracing folder, create the last file, config.toml.tmpl, which is a template file used by k3s to configure containerd. This file is similar to the default configuration that k3s uses, with two more sections at the end of the file that configure containerd to send traces.

version = 2

[plugins."io.containerd.internal.v1.opt"]
  path = "{{ .NodeConfig.Containerd.Opt }}"
[plugins."io.containerd.grpc.v1.cri"]
  stream_server_address = "127.0.0.1"
  stream_server_port = "10010"
  enable_selinux = {{ .NodeConfig.SELinux }}
  enable_unprivileged_ports = {{ .EnableUnprivileged }}
  enable_unprivileged_icmp = {{ .EnableUnprivileged }}

{{- if .DisableCgroup}}
  disable_cgroup = true
{{end}}
{{- if .IsRunningInUserNS }}
  disable_apparmor = true
  restrict_oom_score_adj = true
{{end}}

{{- if .NodeConfig.AgentConfig.PauseImage }}
  sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
{{end}}

{{- if .NodeConfig.AgentConfig.Snapshotter }}
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "{{ .NodeConfig.AgentConfig.Snapshotter }}"
  disable_snapshot_annotations = {{ if eq .NodeConfig.AgentConfig.Snapshotter "stargz" }}false{{else}}true{{end}}
{{ if eq .NodeConfig.AgentConfig.Snapshotter "stargz" }}
{{ if .NodeConfig.AgentConfig.ImageServiceSocket }}
[plugins."io.containerd.snapshotter.v1.stargz"]
cri_keychain_image_service_path = "{{ .NodeConfig.AgentConfig.ImageServiceSocket }}"
[plugins."io.containerd.snapshotter.v1.stargz".cri_keychain]
enable_keychain = true
{{end}}
{{ if .PrivateRegistryConfig }}
{{ if .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors]{{end}}
{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors."{{$k}}"]
  endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
{{if $v.Rewrites}}
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors."{{$k}}".rewrite]
{{range $pattern, $replace := $v.Rewrites}}
    "{{$pattern}}" = "{{$replace}}"
{{end}}
{{end}}
{{end}}
{{range $k, $v := .PrivateRegistryConfig.Configs }}
{{ if $v.Auth }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.configs."{{$k}}".auth]
  {{ if $v.Auth.Username }}username = {{ printf "%q" $v.Auth.Username }}{{end}}
  {{ if $v.Auth.Password }}password = {{ printf "%q" $v.Auth.Password }}{{end}}
  {{ if $v.Auth.Auth }}auth = {{ printf "%q" $v.Auth.Auth }}{{end}}
  {{ if $v.Auth.IdentityToken }}identitytoken = {{ printf "%q" $v.Auth.IdentityToken }}{{end}}
{{end}}
{{ if $v.TLS }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.configs."{{$k}}".tls]
  {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
  {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
  {{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
  {{ if $v.TLS.InsecureSkipVerify }}insecure_skip_verify = true{{end}}
{{end}}
{{end}}
{{end}}
{{end}}
{{end}}

{{- if not .NodeConfig.NoFlannel }}
[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
  conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
{{end}}

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = {{ .SystemdCgroup }}

{{ if .PrivateRegistryConfig }}
{{ if .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]{{end}}
{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{{$k}}"]
  endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
{{if $v.Rewrites}}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{{$k}}".rewrite]
{{range $pattern, $replace := $v.Rewrites}}
    "{{$pattern}}" = "{{$replace}}"
{{end}}
{{end}}
{{end}}

{{range $k, $v := .PrivateRegistryConfig.Configs }}
{{ if $v.Auth }}
[plugins."io.containerd.grpc.v1.cri".registry.configs."{{$k}}".auth]
  {{ if $v.Auth.Username }}username = {{ printf "%q" $v.Auth.Username }}{{end}}
  {{ if $v.Auth.Password }}password = {{ printf "%q" $v.Auth.Password }}{{end}}
  {{ if $v.Auth.Auth }}auth = {{ printf "%q" $v.Auth.Auth }}{{end}}
  {{ if $v.Auth.IdentityToken }}identitytoken = {{ printf "%q" $v.Auth.IdentityToken }}{{end}}
{{end}}
{{ if $v.TLS }}
[plugins."io.containerd.grpc.v1.cri".registry.configs."{{$k}}".tls]
  {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
  {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
  {{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
  {{ if $v.TLS.InsecureSkipVerify }}insecure_skip_verify = true{{end}}
{{end}}
{{end}}
{{end}}

{{range $k, $v := .ExtraRuntimes}}
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."{{$k}}"]
  runtime_type = "{{$v.RuntimeType}}"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."{{$k}}".options]
  BinaryName = "{{$v.BinaryName}}"
{{end}}

[plugins."io.containerd.tracing.processor.v1.otlp"]
  endpoint = "host.k3d.internal:4317"
  protocol = "grpc"
  insecure = true

[plugins."io.containerd.internal.v1.tracing"]
  sampling_ratio = 1.0
  service_name = "containerd"

After creating these files, open a terminal inside the kubetracing folder and run k3d to create a cluster. Before running this command, replace the [CURRENT_PATH] placeholder with the full path of the kubetracing folder. You can retrieve it by running the echo $PWD command in that folder.

k3d cluster create tracingcluster \
  --image=rancher/k3s:v1.27.1-k3s1 \
  --volume '[CURRENT_PATH]/config.toml.tmpl:/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl@server:*' \
  --volume '[CURRENT_PATH]/config:/etc/kube-tracing@server:*' \
  --k3s-arg '--kube-apiserver-arg=tracing-config-file=/etc/kube-tracing/apiserver-tracing.yaml@server:*' \
  --k3s-arg '--kube-apiserver-arg=feature-gates=APIServerTracing=true@server:*' \
  --k3s-arg '--kubelet-arg=config=/etc/kube-tracing/kubelet-tracing.yaml@server:*'

This command will create a Kubernetes cluster with version v1.27.1, set up across three Docker containers on your machine. If you run the command kubectl cluster-info now, you will see this output:

Kubernetes control plane is running at
CoreDNS is running at
Metrics-server is running at

Going back to the logs of the observability environment, you should see spans for internal Kubernetes operations being published to the OpenTelemetry Collector, like this:

Span #90
    Trace ID       : 03a7bf9008d54f02bcd4f14aa5438202
    Parent ID      :
    ID             : d7a10873192f7066
    Name           : KubernetesAPI
    Kind           : SPAN_KIND_SERVER
    Start time     : 2023-05-18 01:51:44.954563708 +0000 UTC
    End time       : 2023-05-18 01:51:44.957555323 +0000 UTC
    Status code    : STATUS_CODE_UNSET
    Status message :
     -> net.transport: STRING(ip_tcp)
     -> net.peer.ip: STRING(
     -> net.peer.port: INT(54678)
     -> net.host.ip: STRING(
     -> net.host.port: INT(6443)
     -> http.target: STRING(/api/v1/namespaces/kube-system/pods/helm-install-traefik-crd-8w4wd)
     -> http.server_name: STRING(KubernetesAPI)
     -> http.user_agent: STRING(k3s/v1.27.1+k3s1 (linux/amd64) kubernetes/bc5b42c)
     -> http.scheme: STRING(https)
     -> http.host: STRING(
     -> http.flavor: STRING(2)
     -> http.method: STRING(GET)
     -> http.wrote_bytes: INT(4724)
     -> http.status_code: INT(200)

Testing the Cluster Runtime

With the Observability environment and the Kubernetes cluster set up, you can now trigger commands against Kubernetes and see traces of these actions in Jaeger.

Open the browser, and navigate to the Jaeger UI located at http://localhost:16686/search. You’ll see that the apiserver, containerd, and kubelet services are publishing traces:

Jaeger screen with services dropdown open showing apiserver, containerd and kubelet services as options

Choose apiserver and click on “Find Traces”. Here you see traces from the Kubernetes control plane:

Jaeger screen showing a list of spans found for apiserver

Let’s run a sample command against Kubernetes with kubectl, like running an echo:

$ kubectl run -it --rm --restart=Never --image=alpine echo-command -- echo hi

# Output
# If you don't see a command prompt, try pressing enter.
# warning: couldn't attach to pod/echo-command, falling back to streaming logs: unable to upgrade connection: container echo-command not found in pod echo-command_default
# hi
# pod "echo-command" deleted

And now, open Jaeger again, choose the kubelet service and the syncPod operation, and add the tag k8s.pod=default/echo-command. You should see spans related to this pod:

Jaeger screen showing a list of spans found for the syncPod operation on kubelet service

Expanding one trace, you’ll see the operations that created this pod:

Jaeger screen showing a single syncPod expanded


Even in beta, the traces from both kubelet and the apiserver can help a developer understand what’s happening under the hood in Kubernetes and start debugging issues.

This will be especially helpful for developers who build custom tasks, like Kubernetes Operators that update internal resources to add more functionality to Kubernetes.

As a team focused on building an open source tool in the observability space, the opportunity to help the overall OpenTelemetry community was important to us. That’s why we were researching new ways of collecting traces from the core Kubernetes engine. With the current level of observability exposed by Kubernetes, we wanted to publish our findings to help others interested in the current state of distributed tracing in the Kubernetes engine. Daniel Dias and Sebastian Choren are working on Tracetest, an open source tool that allows you to develop and test your distributed system with OpenTelemetry. It works with any OTel-compatible system and enables trace-based tests to be created. Check it out at

The example sources used in this article, and setup instructions are available from the Tracetest repository.