#general

February 13, 2026
Jiten Purswani 4:07 AM
Hi, I am getting this error while running the quickstart command:
Deleted nodes: ["archestra-mcp-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged archestra-mcp-control-plane kubeadm init --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0213 03:29:53.144936 247 initconfiguration.go:260] loading configuration from "/kind/kubeadm.conf"
W0213 03:29:53.145640 247 common.go:100] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0213 03:29:53.146518 247 common.go:100] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0213 03:29:53.147052 247 common.go:100] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "JoinConfiguration"). Please use 'kubeadm config migrate --old-config old-config-file --new-config new-config-file', which will write the new, similar spec using a newer API version.
W0213 03:29:53.147439 247 initconfiguration.go:361] [config] WARNING: Ignored configuration document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.35.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0213 03:29:53.149466 247 certs.go:111] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0213 03:29:53.186784 247 certs.go:472] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [archestra-mcp-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.20.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0213 03:29:53.392276 247 certs.go:111] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0213 03:29:53.448463 247 certs.go:472] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0213 03:29:53.496347 247 certs.go:111] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0213 03:29:53.620613 247 certs.go:472] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [archestra-mcp-control-plane localhost] and IPs [172.20.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [archestra-mcp-control-plane localhost] and IPs [172.20.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0213 03:29:54.030291 247 certs.go:77] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0213 03:29:54.251275 247 kubeconfig.go:111] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0213 03:29:54.376731 247 kubeconfig.go:111] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0213 03:29:54.455445 247 kubeconfig.go:111] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0213 03:29:54.496423 247 kubeconfig.go:111] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0213 03:29:54.541698 247 kubeconfig.go:111] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0213 03:29:54.254741 247 local.go:67] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0213 03:29:54.254814 247 manifests.go:125] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0213 03:29:54.256137 247 certs.go:472] validating certificate period for CA certificate
I0213 03:29:54.256296 247 manifests.go:151] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0213 03:29:54.257162 247 manifests.go:151] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0213 03:29:54.257171 247 manifests.go:151] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0213 03:29:54.257174 247 manifests.go:151] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0213 03:29:54.257178 247 manifests.go:151] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0213 03:29:54.258932 247 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0213 03:29:54.258971 247 manifests.go:125] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0213 03:29:54.259673 247 manifests.go:151] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0213 03:29:54.259707 247 manifests.go:151] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0213 03:29:54.259712 247 manifests.go:151] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0213 03:29:54.259715 247 manifests.go:151] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0213 03:29:54.259718 247 manifests.go:151] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0213 03:29:54.259720 247 manifests.go:151] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0213 03:29:54.259723 247 manifests.go:151] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0213 03:29:54.264736 247 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0213 03:29:54.264781 247 manifests.go:125] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0213 03:29:54.265009 247 manifests.go:151] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0213 03:29:54.266366 247 manifests.go:178] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0213 03:29:54.266403 247 kubelet.go:69] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0213 03:29:53.300675 247 loader.go:405] Config loaded from file: /etc/kubernetes/admin.conf
I0213 03:29:53.301533 247 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
I0213 03:29:53.301598 247 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0213 03:29:53.301607 247 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0213 03:29:53.301613 247 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0213 03:29:53.301617 247 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
I0213 03:29:53.301622 247 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001654422s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
main.main
runtime.main
runtime/proc.go:285
runtime.goexit
runtime/asm_amd64.s:1693
ERROR: Failed to create KinD cluster
Can anyone please help me with this?
3 replies
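The kubeadm output above already names the next diagnostic step (systemctl status kubelet / journalctl -xeu kubelet), but since kind runs each "node" as a Docker container, those commands have to be executed inside the node container. A sketch of that, using the node name from the error output; the cluster name "archestra-mcp" is an assumption inferred from the "archestra-mcp-control-plane" node name:

```shell
# Run the troubleshooting commands kubeadm suggests, inside the kind node
# container (container name taken from the error output above).
docker exec archestra-mcp-control-plane systemctl status kubelet
docker exec archestra-mcp-control-plane journalctl -xeu kubelet --no-pager | tail -n 50

# Collect all node logs into a local directory for a bug report.
# "archestra-mcp" is the assumed cluster name, not confirmed by the log.
kind export logs --name archestra-mcp ./kind-logs
```

On many hosts this failure traces back to host cgroup configuration (the "required cgroups disabled" hint in the error), which the journalctl output should make explicit.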
Aayush Sharma 7:14 AM
@everyone 🚨 Project submissions for the 2 FAST 2 MCP hackathon are now OPEN! ⚡
Built something blazing fast with MCP? Now’s the time to ship it 🚀
✅ Submit your project
✅ Share your demo & repo
✅ Share a short demo video about the project
⏰ Don’t wait for the deadline rush. Submit early so you can iterate, polish, and make your project stand out.
🔗 Submit your project here: https://forms.gle/23VfngcFUy1JGEjE8
Sweta Agarwal 7:49 AM
Will all entries get a participation certificate?
Rayhan Kanekal 11:21 AM
How do we add members on the archestra.ai site for the hackathon? When I tried to share the invitation link with my team members, they got an "invalid link" message.
2 replies
Archestra App 1:48 PM
New release! 🚀🚀🚀
1.0.44 (2026-02-13)
Features
  • add CIMD (Client ID Metadata Documents) support for MCP OAuth 2.1 (#2735) (587702c)
  • add external IdP JWKS authentication for MCP Gateway (#2767) (7da8fc1)
  • Detect external agent executions (#2737) (8f7727d)
  • make policy config subagent use multi-provider LLM support (#2627) (3641d4b)
  • msteams in 5 mins (#2646) (8cee11f)
  • *sso:* add RP-Initiated Logout to terminate IdP session on sign-out (#2738) (7ae99a4)
Bug Fixes
  • backport a2a executor model name fix (#2761) (83e63cf)
  • correct misleading error message for block_always tool policy (#2783) (613f3d6), closes #2731
  • fix golang cve (#2730) (68ab982)
  • fix preview in new tab, avoid prop drilling (#2775) (1dd0fcd)
  • improve KinD cluster creation error messaging in Docker quickstart (#2732) (d512b30)
  • issue when handling MCP servers which contained __ in name (#2728) (d5a1f7b)
  • mobile responsiveness on mcp registry and logs pages (#2712) (5a47cb8)
  • move ngrok from build-time installation to runtime download (#2781) (5993db6)
  • pin KinD node image to v1.34.3 (#2780) (bd55050)
  • prevent swallowing provider error (#2779) (0babeed)
  • propagate correct provider in A2A/chat error responses (#2688) (307166e)
  • skip delegations query for LLM proxy agents (#2784) (768f05f)
  • stop button terminates subagents execution (#2713) (35040e0)
Dependencies
  • bump import-in-the-middle from 2.0.6 to 3.0.0 in /platform (#2771) (4f8faa2)
  • bump jsdom from 27.4.0 to 28.0.0 in /platform (#2770) (6c134de)
Miscellaneous Chores
  • add website dev server as optional Tilt resource (#2724) (d8940d8)
  • capture MCP metrics from Archestra chat (#2718) (2bca4ca)
  • *deps:* bump qs from 6.14.1 to 6.14.2 in /platform/mcpserverdocker_image (#2773) (695bb5e)
  • polish MCP gateway logs columns (+ deduplicate parseFullToolName function) (#2719) (cc40316)
  • polishing LLM/MCP logs tables (#2725) (385f747)
  • polishing MCP gateway JWKS auth (#2782) (8596be2)
  • *release:* bump version (#2765) (d43c6c6)
  • show playwright mcp as built-in mcp, deprecate isGloballyAvailable flag (#2729) (6119bf6)
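Two of the fixes above, #2732 and #2780, target exactly the KinD quickstart failure reported earlier in this channel (the failing log shows kubeadm initializing Kubernetes v1.35.0). If an older quickstart still pulls the unpinned node image, one possible workaround is to recreate the cluster with the pinned image from #2780. A sketch only; the cluster name "archestra-mcp" is an assumption inferred from the "archestra-mcp-control-plane" node name, and the exact image tag the quickstart expects may differ:

```shell
# Remove the half-created cluster, then recreate it with the node image
# pinned in fix #2780 (v1.34.3) instead of the latest default.
kind delete cluster --name archestra-mcp
kind create cluster --name archestra-mcp --image kindest/node:v1.34.3
```

If the cluster is normally created by the Archestra quickstart rather than by hand, updating to release 1.0.44 so the quickstart itself uses the pinned image is the cleaner fix.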
Vishnu Varthan 3:41 PM
I added the API key but it is not shown.
1 reply
Srinidhi P 4:33 PM
Hey, I have a Master Orchestration Agent connected to multiple sub-agents and a self-hosted MCP server, and orchestration works correctly when triggered from the Archestra UI chat. However, when my external frontend tries to trigger orchestration via the /api/chat endpoint, Archestra returns errors like Unauthenticated and falls back to frontend-side mock orchestration. As a result, no agent runs or MCP tool calls appear in the Archestra logs. My goal is to trigger the Master Orchestration Agent programmatically from an external frontend and receive the real orchestration response. What is the correct way to initiate a new agent orchestration session via the API so that it runs fully inside Archestra, giving me an orchestration-first system rather than a chat-based, UI-first one?
4 replies
1. Failed to connect to MCP server atlassian__remote-mcp: Streamable HTTP error: Error POSTing to endpoint: Missing sessionId parameter

Read-only live mirror of Archestra.AI Slack
