```
No kind clusters found.
Creating cluster "tekton" ...
 ✓ Ensuring node image (kindest/node:v1.32.2) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
Deleted nodes: ["tekton-control-plane" "tekton-worker" "tekton-worker2"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged tekton-control-plane kubeadm init --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0305 13:37:36.023954 236 initconfiguration.go:261] loading configuration from "/kind/kubeadm.conf"
W0305 13:37:36.024528 236 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0305 13:37:36.025003 236 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0305 13:37:36.025357 236 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0305 13:37:36.025595 236 initconfiguration.go:361] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.32.2
I0305 13:37:36.026863 236 certs.go:112] creating a new certificate authority for ca
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
I0305 13:37:36.149959 236 certs.go:473] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost tekton-control-plane] and IPs [10.96.0.1 172.18.0.3 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0305 13:37:36.656727 236 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0305 13:37:36.771324 236 certs.go:473] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0305 13:37:36.947964 236 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0305 13:37:37.262109 236 certs.go:473] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost tekton-control-plane] and IPs [172.18.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost tekton-control-plane] and IPs [172.18.0.3 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0305 13:37:37.965153 236 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0305 13:37:38.125602 236 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0305 13:37:38.362680 236 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0305 13:37:38.479756 236 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0305 13:37:38.654963 236 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0305 13:37:38.714896 236 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0305 13:37:38.794785 236 local.go:66] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0305 13:37:38.794817 236 manifests.go:104] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0305 13:37:38.795036 236 certs.go:473] validating certificate period for CA certificate
I0305 13:37:38.795115 236 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0305 13:37:38.795123 236 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0305 13:37:38.795127 236 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0305 13:37:38.795131 236 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0305 13:37:38.795137 236 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0305 13:37:38.795851 236 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0305 13:37:38.795862 236 manifests.go:104] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0305 13:37:38.796047 236 manifests.go:130] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0305 13:37:38.796055 236 manifests.go:130] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0305 13:37:38.796077 236 manifests.go:130] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0305 13:37:38.796082 236 manifests.go:130] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0305 13:37:38.796086 236 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0305 13:37:38.796091 236 manifests.go:130] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0305 13:37:38.796109 236 manifests.go:130] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0305 13:37:38.796774 236 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0305 13:37:38.796784 236 manifests.go:104] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0305 13:37:38.796970 236 manifests.go:130] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0305 13:37:38.797414 236 manifests.go:159] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0305 13:37:38.797423 236 kubelet.go:70] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0305 13:37:38.969848 236 loader.go:402] Config loaded from file: /etc/kubernetes/admin.conf
I0305 13:37:38.970240 236 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0305 13:37:38.970253 236 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0305 13:37:38.970259 236 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0305 13:37:38.970263 236 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001047694s

Unfortunately, an error has occurred:
	The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'

could not initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:112
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:122
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:261
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:450
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:129
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.8.1/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:272
runtime.goexit
	runtime/asm_amd64.s:1700
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:262
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:450
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:129
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/cobra@v1.8.1/command.go:985
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/cobra@v1.8.1/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/cobra@v1.8.1/command.go:1041
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:47
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:272
runtime.goexit
	runtime/asm_amd64.s:1700
```
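Not part of the original output, but worth checking first: kind's documented known issues list low inotify limits on the Linux host as a frequent cause of exactly this symptom (the kubelet inside the node container never becomes healthy). Whether that is the cause on this Fedora 40 machine is an assumption; the suggested values below are the ones the kind documentation recommends:

```shell
# Check the host's inotify limits; kind/kubelet can fail to start when
# these are too low. (Assumption: this is the cause on this machine.)
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# If the values are small, the kind docs suggest raising them (root required):
#   sudo sysctl fs.inotify.max_user_watches=524288
#   sudo sysctl fs.inotify.max_user_instances=512
```

If raising the limits does not help, the kubelet journal from inside the node container is the next place to look.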
## Expected Behavior

Following this guide, I expect a running kind cluster with running Tekton components. I am using Linux Fedora 40.

## Actual Behavior

Installation fails when kind tries to start the control-plane.
## Steps to Reproduce the Problem

In the `plumbing` repo, from the root directory run `./hack/tekton_in_kind.sh -k`.
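For comparison, a hypothetical direct kind invocation that appears equivalent to what the script does, inferred from the log (cluster name `tekton`, three nodes, image `kindest/node:v1.32.2`); the real script may generate a different config:

```shell
# Hypothetical kind config matching the log above: one control-plane
# node plus two workers ("tekton-worker", "tekton-worker2").
cat <<'EOF' > kind-tekton.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Then (requires kind and a running Docker daemon):
#   kind create cluster --name tekton --image kindest/node:v1.32.2 --config kind-tekton.yaml
echo "wrote $(grep -c 'role:' kind-tekton.yaml) node entries"
```

Reproducing the failure this way would show whether the problem is in the script or in kind itself.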
## Additional Info

- Kubernetes version:
  Output of `kubectl version`:
- Main error thrown by kubelet in `tekton-control-plane`
- Full error output in terminal (included above)
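One further hedged sketch for gathering the missing kubelet logs: kubeadm's hint to run `journalctl -xeu kubelet` cannot be followed here because kind deletes the node containers on failure ("Deleted nodes" in the output above). Re-creating the cluster with kind's `--retain` flag keeps them around; flags are from the kind CLI, the cluster name is taken from the log, and all of this requires kind plus a running Docker daemon:

```shell
# Keep the node containers on failure so they can be inspected.
kind create cluster --name tekton --retain

# Collect all node logs (kubelet journal, containerd logs, static pod
# logs) into ./kind-logs, suitable for attaching to this issue.
kind export logs ./kind-logs --name tekton

# Or read the kubelet journal directly inside the control-plane node:
docker exec tekton-control-plane journalctl -xeu kubelet

# Clean up afterwards:
kind delete cluster --name tekton
```

Attaching the `kind export logs` archive would likely make the root cause much easier to pin down.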