Thread

Jiten Purswani 4:07 AM
Hi, I am getting this error while running the quickstart command:
Deleted nodes: ["archestra-mcp-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged archestra-mcp-control-plane kubeadm init --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0213 03:29:53.144936 247 initconfiguration.go:260] loading configuration from "/kind/kubeadm.conf"
W0213 03:29:53.145640 247 common.go:100] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0213 03:29:53.146518 247 common.go:100] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0213 03:29:53.147052 247 common.go:100] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0213 03:29:53.147439 247 initconfiguration.go:361] [config] WARNING: Ignored configuration document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.35.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0213 03:29:53.149466 247 certs.go:111] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0213 03:29:53.186784 247 certs.go:472] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [archestra-mcp-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.20.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0213 03:29:53.392276 247 certs.go:111] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0213 03:29:53.448463 247 certs.go:472] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0213 03:29:53.496347 247 certs.go:111] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0213 03:29:53.620613 247 certs.go:472] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [archestra-mcp-control-plane localhost] and IPs [172.20.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [archestra-mcp-control-plane localhost] and IPs [172.20.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0213 03:29:54.030291 247 certs.go:77] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0213 03:29:54.251275 247 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0213 03:29:54.376731 247 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0213 03:29:54.455445 247 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0213 03:29:54.496423 247 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0213 03:29:54.541698 247 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0213 03:29:54.254741 247 local.go:67] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0213 03:29:54.254814 247 manifests.go:125] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0213 03:29:54.256137 247 certs.go:472] validating certificate period for CA certificate
I0213 03:29:54.256296 247 manifests.go:151] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0213 03:29:54.257162 247 manifests.go:151] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0213 03:29:54.257171 247 manifests.go:151] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0213 03:29:54.257174 247 manifests.go:151] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0213 03:29:54.257178 247 manifests.go:151] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0213 03:29:54.258932 247 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0213 03:29:54.258971 247 manifests.go:125] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0213 03:29:54.259673 247 manifests.go:151] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0213 03:29:54.259707 247 manifests.go:151] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0213 03:29:54.259712 247 manifests.go:151] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0213 03:29:54.259715 247 manifests.go:151] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0213 03:29:54.259718 247 manifests.go:151] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0213 03:29:54.259720 247 manifests.go:151] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0213 03:29:54.259723 247 manifests.go:151] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0213 03:29:54.264736 247 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0213 03:29:54.264781 247 manifests.go:125] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0213 03:29:54.265009 247 manifests.go:151] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0213 03:29:54.266366 247 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0213 03:29:54.266403 247 kubelet.go:69] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0213 03:29:53.300675 247 loader.go:405] Config loaded from file: /etc/kubernetes/admin.conf
I0213 03:29:53.301533 247 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
I0213 03:29:53.301598 247 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0213 03:29:53.301607 247 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0213 03:29:53.301613 247 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0213 03:29:53.301617 247 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
I0213 03:29:53.301622 247 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001654422s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
main.main
runtime.main
runtime/proc.go:285
runtime.goexit
runtime/asm_amd64.s:1693
ERROR: Failed to create KinD cluster
Can anyone please help me with this?

joey (archestra team) 11:16 AM
hi there 👋 this is a known issue with the latest KinD version defaulting to Kubernetes v1.35.0, which drops cgroup v1 support and causes the kubelet to fail to start on many Docker Desktop setups.
I have a fix shipping shortly (PR) that will pin the kind node image to a compatible version; the fix will be included in archestra v1.0.44.
in the meantime, try increasing Docker Desktop memory to 4+ GB and restarting Docker. If that still doesn't work, you can try our developer quickstart guide (which uses Tilt).
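(for anyone hitting this before 1.0.44 lands: the node-image pin joey mentions can also be done by hand with a kind cluster config. this is just a sketch — the image tag below is an example, not the one the fix will ship with; pick a `kindest/node` tag from the kind release notes that matches your kind binary, and the cluster name here is taken from the error output above.)

```yaml
# kind-config.yaml — pin the node image so kind does not default
# to a Kubernetes version (v1.35.0+) that requires cgroup v2.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # example tag only; use one listed in the release notes
    # for your installed kind version
    image: kindest/node:v1.31.2
```

then something like `kind create cluster --name archestra-mcp --config kind-config.yaml` should bring the cluster up on the older Kubernetes version.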
joey (archestra team) 2:17 PM
1.0.44 is out, please pull the `:latest` tag of the Docker image and retry
Jiten Purswani 2:42 PM
Thank you so much, it's working now!