=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 14:41:26.423517 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:43:42.554986 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:44:10.266695 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.175069 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.181592 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.193154 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.214555 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.256095 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.337617 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.499208 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:18.821016 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:19.463093 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:20.744701 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:23.306458 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:28.428809 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:38.670099 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:45:59.151623 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:46:40.114096 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:48:02.038469 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-831661/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 14:48:42.554842 1252934 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/addons-386332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.203306967s)
-- stdout --
* [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22122
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-562018" primary control-plane node in "functional-562018" cluster
* Pulling base image v0.0.48-1765275396-22083 ...
* Found network options:
- HTTP_PROXY=localhost:39059
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:39059 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001226262s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001274207s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001274207s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
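The stderr above names three possible remediations: add the minikube IP to NO_PROXY (the proxy warning), pass --extra-config=kubelet.cgroup-driver=systemd (minikube's own suggestion), and, per the kubeadm warning, set the kubelet configuration option 'FailCgroupV1' to 'false' on this cgroups v1 host. A minimal retry sketch combining the first two; whether either actually clears the kubelet health check with kubelet v1.35 on cgroups v1 is an assumption here, not something this log confirms:

    # Sketch only: every flag and value below is taken from the warnings above.
    export NO_PROXY="$NO_PROXY,192.168.49.2"
    out/minikube-linux-arm64 start -p functional-562018 \
        --memory=4096 --apiserver-port=8441 --wait=all \
        --driver=docker --container-runtime=containerd \
        --kubernetes-version=v1.35.0-beta.0 \
        --extra-config=kubelet.cgroup-driver=systemd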
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-562018
helpers_test.go:244: (dbg) docker inspect functional-562018:
-- stdout --
[
{
"Id": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
"Created": "2025-12-13T14:41:15.451086653Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1291703,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-13T14:41:15.527927053Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
"ResolvConfPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hostname",
"HostsPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/hosts",
"LogPath": "/var/lib/docker/containers/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648/2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648-json.log",
"Name": "/functional-562018",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-562018:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-562018",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "2cd1277ca7835deac6e1a0712709fad92e97d38f446ee5c3cc72ac6181ca9648",
"LowerDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e-init/diff:/var/lib/docker/overlay2/c0934f2670944d6bb805127f8bececf15cca5fbe6c7475bcf7d1f046da690cb9/diff",
"MergedDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/merged",
"UpperDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/diff",
"WorkDir": "/var/lib/docker/overlay2/6e34607e0d0cb4a559270578e1baa649dac4d4d9bc1c6a4a456e6ab79b822a2e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-562018",
"Source": "/var/lib/docker/volumes/functional-562018/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-562018",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-562018",
"name.minikube.sigs.k8s.io": "functional-562018",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f4b22297a29553cdd0dbc4eaa766abcdb1e67465ee18f1e2f5f9917dc8cf6d08",
"SandboxKey": "/var/run/docker/netns/f4b22297a295",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33918"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33919"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33922"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33920"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33921"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-562018": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "62:f3:95:ff:30:00",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "bd2e7aed753870546b4bccdeed8073ad5795c14334e906d06b964f72dc448c38",
"EndpointID": "a0e947c1e40f773105c811b67b7d1d63f19d3a20060380bbde944bf9bfe39be5",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-562018",
"2cd1277ca783"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-562018 -n functional-562018: exit status 6 (309.892653ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1213 14:49:31.005267 1296774 status.go:458] kubeconfig endpoint: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
** /stderr **
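The stale-context warning and the status error both point at the same condition: the failed start never registered "functional-562018" in the kubeconfig. The command the warning names, as a sketch (it can only repoint kubectl if the profile's endpoint exists to be read):

    out/minikube-linux-arm64 -p functional-562018 update-context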
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-562018 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-831661 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
│ ssh │ functional-831661 ssh sudo cat /etc/ssl/certs/12529342.pem │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
│ image │ functional-831661 image ls │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
│ ssh │ functional-831661 ssh sudo cat /usr/share/ca-certificates/12529342.pem │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:40 UTC │
│ ssh │ functional-831661 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image load --daemon kicbase/echo-server:functional-831661 --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:40 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image save kicbase/echo-server:functional-831661 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image rm kicbase/echo-server:functional-831661 --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ update-context │ functional-831661 update-context --alsologtostderr -v=2 │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ update-context │ functional-831661 update-context --alsologtostderr -v=2 │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ update-context │ functional-831661 update-context --alsologtostderr -v=2 │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image save --daemon kicbase/echo-server:functional-831661 --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls --format json --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls --format short --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls --format table --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ ssh │ functional-831661 ssh pgrep buildkitd │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ │
│ image │ functional-831661 image ls --format yaml --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image build -t localhost/my-image:functional-831661 testdata/build --alsologtostderr │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ image │ functional-831661 image ls │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ delete │ -p functional-831661 │ functional-831661 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ 13 Dec 25 14:41 UTC │
│ start │ -p functional-562018 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-562018 │ jenkins │ v1.37.0 │ 13 Dec 25 14:41 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 14:41:10
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 14:41:10.536352 1291317 out.go:360] Setting OutFile to fd 1 ...
I1213 14:41:10.536463 1291317 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:10.536467 1291317 out.go:374] Setting ErrFile to fd 2...
I1213 14:41:10.536471 1291317 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 14:41:10.536759 1291317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22122-1251074/.minikube/bin
I1213 14:41:10.537159 1291317 out.go:368] Setting JSON to false
I1213 14:41:10.537974 1291317 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23019,"bootTime":1765613851,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I1213 14:41:10.538032 1291317 start.go:143] virtualization:
I1213 14:41:10.542681 1291317 out.go:179] * [functional-562018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1213 14:41:10.547211 1291317 out.go:179] - MINIKUBE_LOCATION=22122
I1213 14:41:10.547327 1291317 notify.go:221] Checking for updates...
I1213 14:41:10.554065 1291317 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 14:41:10.557348 1291317 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22122-1251074/kubeconfig
I1213 14:41:10.560675 1291317 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22122-1251074/.minikube
I1213 14:41:10.563856 1291317 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1213 14:41:10.567020 1291317 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 14:41:10.570333 1291317 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 14:41:10.593803 1291317 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1213 14:41:10.593920 1291317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 14:41:10.659445 1291317 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 14:41:10.65054644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 14:41:10.659539 1291317 docker.go:319] overlay module found
I1213 14:41:10.663023 1291317 out.go:179] * Using the docker driver based on user configuration
I1213 14:41:10.665952 1291317 start.go:309] selected driver: docker
I1213 14:41:10.665960 1291317 start.go:927] validating driver "docker" against <nil>
I1213 14:41:10.665972 1291317 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 14:41:10.666736 1291317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 14:41:10.720114 1291317 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 14:41:10.711434155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 14:41:10.720256 1291317 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1213 14:41:10.720471 1291317 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 14:41:10.723664 1291317 out.go:179] * Using Docker driver with root privileges
I1213 14:41:10.726526 1291317 cni.go:84] Creating CNI manager for ""
I1213 14:41:10.726586 1291317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1213 14:41:10.726594 1291317 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1213 14:41:10.726672 1291317 start.go:353] cluster config:
{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 14:41:10.729831 1291317 out.go:179] * Starting "functional-562018" primary control-plane node in "functional-562018" cluster
I1213 14:41:10.732772 1291317 cache.go:134] Beginning downloading kic base image for docker with containerd
I1213 14:41:10.735803 1291317 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
I1213 14:41:10.738708 1291317 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 14:41:10.738744 1291317 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1213 14:41:10.738760 1291317 cache.go:65] Caching tarball of preloaded images
I1213 14:41:10.738800 1291317 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
I1213 14:41:10.738843 1291317 preload.go:238] Found /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1213 14:41:10.738853 1291317 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1213 14:41:10.739197 1291317 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
I1213 14:41:10.739214 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json: {Name:mka487a9cc8c41f7613c6f5f9d1fe183d2b5e51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:10.761129 1291317 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
I1213 14:41:10.761147 1291317 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
I1213 14:41:10.761168 1291317 cache.go:243] Successfully downloaded all kic artifacts
I1213 14:41:10.761197 1291317 start.go:360] acquireMachinesLock for functional-562018: {Name:mk6a7956e4fce5d8e0f4d6fe039ab67ad6cd688b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 14:41:10.761309 1291317 start.go:364] duration metric: took 97.729µs to acquireMachinesLock for "functional-562018"
I1213 14:41:10.761331 1291317 start.go:93] Provisioning new machine with config: &{Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1213 14:41:10.761394 1291317 start.go:125] createHost starting for "" (driver="docker")
I1213 14:41:10.764952 1291317 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1213 14:41:10.765226 1291317 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:39059 to docker env.
I1213 14:41:10.765251 1291317 start.go:159] libmachine.API.Create for "functional-562018" (driver="docker")
I1213 14:41:10.765271 1291317 client.go:173] LocalClient.Create starting
I1213 14:41:10.765335 1291317 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem
I1213 14:41:10.765365 1291317 main.go:143] libmachine: Decoding PEM data...
I1213 14:41:10.765378 1291317 main.go:143] libmachine: Parsing certificate...
I1213 14:41:10.765439 1291317 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem
I1213 14:41:10.765454 1291317 main.go:143] libmachine: Decoding PEM data...
I1213 14:41:10.765464 1291317 main.go:143] libmachine: Parsing certificate...
I1213 14:41:10.765817 1291317 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 14:41:10.783804 1291317 cli_runner.go:211] docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 14:41:10.783877 1291317 network_create.go:284] running [docker network inspect functional-562018] to gather additional debugging logs...
I1213 14:41:10.783893 1291317 cli_runner.go:164] Run: docker network inspect functional-562018
W1213 14:41:10.800008 1291317 cli_runner.go:211] docker network inspect functional-562018 returned with exit code 1
I1213 14:41:10.800029 1291317 network_create.go:287] error running [docker network inspect functional-562018]: docker network inspect functional-562018: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-562018 not found
I1213 14:41:10.800041 1291317 network_create.go:289] output of [docker network inspect functional-562018]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-562018 not found
** /stderr **
I1213 14:41:10.800151 1291317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 14:41:10.816198 1291317 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b3fa0}
I1213 14:41:10.816231 1291317 network_create.go:124] attempt to create docker network functional-562018 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1213 14:41:10.816288 1291317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-562018 functional-562018
I1213 14:41:10.879416 1291317 network_create.go:108] docker network functional-562018 192.168.49.0/24 created
I1213 14:41:10.879438 1291317 kic.go:121] calculated static IP "192.168.49.2" for the "functional-562018" container
I1213 14:41:10.879516 1291317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1213 14:41:10.894192 1291317 cli_runner.go:164] Run: docker volume create functional-562018 --label name.minikube.sigs.k8s.io=functional-562018 --label created_by.minikube.sigs.k8s.io=true
I1213 14:41:10.912069 1291317 oci.go:103] Successfully created a docker volume functional-562018
I1213 14:41:10.912160 1291317 cli_runner.go:164] Run: docker run --rm --name functional-562018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-562018 --entrypoint /usr/bin/test -v functional-562018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
I1213 14:41:11.474739 1291317 oci.go:107] Successfully prepared a docker volume functional-562018
I1213 14:41:11.474794 1291317 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 14:41:11.474801 1291317 kic.go:194] Starting extracting preloaded images to volume ...
I1213 14:41:11.474874 1291317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-562018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
I1213 14:41:15.364284 1291317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22122-1251074/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-562018:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.889376932s)
I1213 14:41:15.364306 1291317 kic.go:203] duration metric: took 3.889500991s to extract preloaded images to volume ...
W1213 14:41:15.364459 1291317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1213 14:41:15.364573 1291317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1213 14:41:15.432829 1291317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-562018 --name functional-562018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-562018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-562018 --network functional-562018 --ip 192.168.49.2 --volume functional-562018:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
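
(Editorial aside: note the --publish=127.0.0.1::8441 style flags above. The empty host-port field asks Docker for an ephemeral loopback port, which is why later steps must ask Docker which port was actually bound; the inspect template a few lines below resolves 22/tcp to 33918 on this run. A hedged Go sketch of that lookup, with the illustrative helper name hostPort:)

    // Sketch: recover the ephemeral host port Docker assigned to a
    // container port, using the same inspect template seen in this log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostPort(container, portProto string) (string, error) {
        // %q quotes the key, matching the logged template for "22/tcp".
        tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", portProto)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        fmt.Println(hostPort("functional-562018", "22/tcp")) // "33918" on the run recorded above
    }
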
I1213 14:41:15.748253 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Running}}
I1213 14:41:15.771512 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 14:41:15.798715 1291317 cli_runner.go:164] Run: docker exec functional-562018 stat /var/lib/dpkg/alternatives/iptables
I1213 14:41:15.855627 1291317 oci.go:144] the created container "functional-562018" has a running status.
I1213 14:41:15.855646 1291317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa...
I1213 14:41:16.480373 1291317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1213 14:41:16.500182 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 14:41:16.517859 1291317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1213 14:41:16.517870 1291317 kic_runner.go:114] Args: [docker exec --privileged functional-562018 chown docker:docker /home/docker/.ssh/authorized_keys]
I1213 14:41:16.558915 1291317 cli_runner.go:164] Run: docker container inspect functional-562018 --format={{.State.Status}}
I1213 14:41:16.576188 1291317 machine.go:94] provisionDockerMachine start ...
I1213 14:41:16.576293 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:16.593933 1291317 main.go:143] libmachine: Using SSH client type: native
I1213 14:41:16.594265 1291317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33918 <nil> <nil>}
I1213 14:41:16.594272 1291317 main.go:143] libmachine: About to run SSH command:
hostname
I1213 14:41:16.594898 1291317 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40720->127.0.0.1:33918: read: connection reset by peer
I1213 14:41:19.747043 1291317 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
I1213 14:41:19.747065 1291317 ubuntu.go:182] provisioning hostname "functional-562018"
I1213 14:41:19.747156 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:19.764757 1291317 main.go:143] libmachine: Using SSH client type: native
I1213 14:41:19.765076 1291317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33918 <nil> <nil>}
I1213 14:41:19.765085 1291317 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-562018 && echo "functional-562018" | sudo tee /etc/hostname
I1213 14:41:19.924377 1291317 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-562018
I1213 14:41:19.924445 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:19.941925 1291317 main.go:143] libmachine: Using SSH client type: native
I1213 14:41:19.942241 1291317 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33918 <nil> <nil>}
I1213 14:41:19.942255 1291317 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-562018' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-562018/g' /etc/hosts;
else
echo '127.0.1.1 functional-562018' | sudo tee -a /etc/hosts;
fi
fi
I1213 14:41:20.099938 1291317 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 14:41:20.099953 1291317 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22122-1251074/.minikube CaCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22122-1251074/.minikube}
I1213 14:41:20.099991 1291317 ubuntu.go:190] setting up certificates
I1213 14:41:20.100001 1291317 provision.go:84] configureAuth start
I1213 14:41:20.100068 1291317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
I1213 14:41:20.118150 1291317 provision.go:143] copyHostCerts
I1213 14:41:20.118219 1291317 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem, removing ...
I1213 14:41:20.118226 1291317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem
I1213 14:41:20.118306 1291317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.pem (1082 bytes)
I1213 14:41:20.118426 1291317 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem, removing ...
I1213 14:41:20.118431 1291317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem
I1213 14:41:20.118459 1291317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/cert.pem (1123 bytes)
I1213 14:41:20.118516 1291317 exec_runner.go:144] found /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem, removing ...
I1213 14:41:20.118519 1291317 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem
I1213 14:41:20.118542 1291317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22122-1251074/.minikube/key.pem (1675 bytes)
I1213 14:41:20.118592 1291317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem org=jenkins.functional-562018 san=[127.0.0.1 192.168.49.2 functional-562018 localhost minikube]
I1213 14:41:20.318865 1291317 provision.go:177] copyRemoteCerts
I1213 14:41:20.318922 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1213 14:41:20.318969 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:20.336603 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 14:41:20.439060 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1213 14:41:20.456521 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1213 14:41:20.473839 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1213 14:41:20.491254 1291317 provision.go:87] duration metric: took 391.229515ms to configureAuth
I1213 14:41:20.491271 1291317 ubuntu.go:206] setting minikube options for container-runtime
I1213 14:41:20.491476 1291317 config.go:182] Loaded profile config "functional-562018": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 14:41:20.491484 1291317 machine.go:97] duration metric: took 3.915285318s to provisionDockerMachine
I1213 14:41:20.491490 1291317 client.go:176] duration metric: took 9.726214447s to LocalClient.Create
I1213 14:41:20.491503 1291317 start.go:167] duration metric: took 9.726252206s to libmachine.API.Create "functional-562018"
I1213 14:41:20.491509 1291317 start.go:293] postStartSetup for "functional-562018" (driver="docker")
I1213 14:41:20.491526 1291317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1213 14:41:20.491573 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1213 14:41:20.491615 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:20.508801 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 14:41:20.615257 1291317 ssh_runner.go:195] Run: cat /etc/os-release
I1213 14:41:20.618550 1291317 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1213 14:41:20.618568 1291317 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1213 14:41:20.618579 1291317 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/addons for local assets ...
I1213 14:41:20.618635 1291317 filesync.go:126] Scanning /home/jenkins/minikube-integration/22122-1251074/.minikube/files for local assets ...
I1213 14:41:20.618725 1291317 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem -> 12529342.pem in /etc/ssl/certs
I1213 14:41:20.618819 1291317 filesync.go:149] local asset: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts -> hosts in /etc/test/nested/copy/1252934
I1213 14:41:20.618862 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1252934
I1213 14:41:20.626510 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /etc/ssl/certs/12529342.pem (1708 bytes)
I1213 14:41:20.643629 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/test/nested/copy/1252934/hosts --> /etc/test/nested/copy/1252934/hosts (40 bytes)
I1213 14:41:20.660823 1291317 start.go:296] duration metric: took 169.299563ms for postStartSetup
I1213 14:41:20.661177 1291317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
I1213 14:41:20.678331 1291317 profile.go:143] Saving config to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/config.json ...
I1213 14:41:20.678636 1291317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1213 14:41:20.678684 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:20.695728 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 14:41:20.796214 1291317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1213 14:41:20.801020 1291317 start.go:128] duration metric: took 10.039612701s to createHost
I1213 14:41:20.801036 1291317 start.go:83] releasing machines lock for "functional-562018", held for 10.039719948s
I1213 14:41:20.801105 1291317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-562018
I1213 14:41:20.821828 1291317 out.go:179] * Found network options:
I1213 14:41:20.824876 1291317 out.go:179] - HTTP_PROXY=localhost:39059
W1213 14:41:20.827760 1291317 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1213 14:41:20.830604 1291317 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1213 14:41:20.833551 1291317 ssh_runner.go:195] Run: cat /version.json
I1213 14:41:20.833592 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:20.833601 1291317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1213 14:41:20.833650 1291317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-562018
I1213 14:41:20.860119 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 14:41:20.861424 1291317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33918 SSHKeyPath:/home/jenkins/minikube-integration/22122-1251074/.minikube/machines/functional-562018/id_rsa Username:docker}
I1213 14:41:21.050374 1291317 ssh_runner.go:195] Run: systemctl --version
I1213 14:41:21.056838 1291317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1213 14:41:21.061240 1291317 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1213 14:41:21.061303 1291317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1213 14:41:21.088779 1291317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1213 14:41:21.088802 1291317 start.go:496] detecting cgroup driver to use...
I1213 14:41:21.088834 1291317 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1213 14:41:21.088893 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1213 14:41:21.104109 1291317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1213 14:41:21.117080 1291317 docker.go:218] disabling cri-docker service (if available) ...
I1213 14:41:21.117133 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1213 14:41:21.134784 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1213 14:41:21.153477 1291317 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1213 14:41:21.272360 1291317 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1213 14:41:21.392638 1291317 docker.go:234] disabling docker service ...
I1213 14:41:21.392701 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1213 14:41:21.414118 1291317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1213 14:41:21.427448 1291317 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1213 14:41:21.550055 1291317 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1213 14:41:21.671395 1291317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1213 14:41:21.683908 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1213 14:41:21.698098 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1213 14:41:21.707263 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1213 14:41:21.717101 1291317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1213 14:41:21.717165 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1213 14:41:21.726164 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1213 14:41:21.734973 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1213 14:41:21.743567 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1213 14:41:21.752005 1291317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1213 14:41:21.760444 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1213 14:41:21.769373 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1213 14:41:21.778366 1291317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
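
(Editorial aside: taken together, the sed edits above rewrite /etc/containerd/config.toml in place: sandbox_image becomes "registry.k8s.io/pause:3.10.1", SystemdCgroup becomes false to match the cgroupfs driver detected on the host at 14:41:21.088, the legacy runtime names are mapped to io.containerd.runc.v2, conf_dir is pinned to "/etc/cni/net.d", and enable_unprivileged_ports is set to true. As a sketch under those assumptions, here is the SystemdCgroup rewrite expressed in Go rather than sed:)

    // Sketch: the Go equivalent of the SystemdCgroup sed rewrite above.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        // (?m) makes ^ and $ match per line, like sed's line-oriented addressing.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Println(err)
        }
    }
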
I1213 14:41:21.787216 1291317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1213 14:41:21.794474 1291317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1213 14:41:21.801928 1291317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 14:41:21.906989 1291317 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1213 14:41:22.043619 1291317 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1213 14:41:22.043692 1291317 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1213 14:41:22.047622 1291317 start.go:564] Will wait 60s for crictl version
I1213 14:41:22.047678 1291317 ssh_runner.go:195] Run: which crictl
I1213 14:41:22.051506 1291317 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1213 14:41:22.076872 1291317 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1213 14:41:22.076938 1291317 ssh_runner.go:195] Run: containerd --version
I1213 14:41:22.099670 1291317 ssh_runner.go:195] Run: containerd --version
I1213 14:41:22.123011 1291317 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1213 14:41:22.125955 1291317 cli_runner.go:164] Run: docker network inspect functional-562018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 14:41:22.141441 1291317 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1213 14:41:22.145215 1291317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 14:41:22.154868 1291317 kubeadm.go:884] updating cluster {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1213 14:41:22.154988 1291317 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 14:41:22.155052 1291317 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 14:41:22.179490 1291317 containerd.go:627] all images are preloaded for containerd runtime.
I1213 14:41:22.179502 1291317 containerd.go:534] Images already preloaded, skipping extraction
I1213 14:41:22.179561 1291317 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 14:41:22.203593 1291317 containerd.go:627] all images are preloaded for containerd runtime.
I1213 14:41:22.203607 1291317 cache_images.go:86] Images are preloaded, skipping loading
I1213 14:41:22.203613 1291317 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1213 14:41:22.203701 1291317 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-562018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1213 14:41:22.203765 1291317 ssh_runner.go:195] Run: sudo crictl info
I1213 14:41:22.232840 1291317 cni.go:84] Creating CNI manager for ""
I1213 14:41:22.232850 1291317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1213 14:41:22.232870 1291317 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1213 14:41:22.232892 1291317 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-562018 NodeName:functional-562018 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1213 14:41:22.232999 1291317 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-562018"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
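
(Editorial aside: the multi-document config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new, the 2237-byte scp at 14:41:22.274, and is copied over kubeadm.yaml just before init, as seen further down at 14:41:23.874. A small stdlib-only Go sketch, assuming that path, that lists the document kinds the file contains (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration):)

    // Sketch: enumerate the kinds in the staged multi-doc kubeadm config.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Documents are separated by lines containing only "---".
        for _, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println(strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }
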
I1213 14:41:22.233066 1291317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1213 14:41:22.240871 1291317 binaries.go:51] Found k8s binaries, skipping transfer
I1213 14:41:22.240931 1291317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1213 14:41:22.248588 1291317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1213 14:41:22.261453 1291317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1213 14:41:22.274286 1291317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1213 14:41:22.287123 1291317 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1213 14:41:22.290857 1291317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
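
(Editorial aside: this is the same guard-then-rewrite pattern used for host.minikube.internal at 14:41:22.141: grep for the expected entry first, and only rewrite /etc/hosts when it is missing. A simplified Go version of the rewrite, assuming the entry is written space-separated as in the logged echo; the helper name setHostsEntry is illustrative:)

    // Sketch: drop any existing line for the host name, then append the
    // new mapping, mirroring the bash one-liner above in simplified form.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // The logged grep matches a tab-separated suffix; accept a space too.
            if !strings.HasSuffix(line, " "+name) && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+" "+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(setHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"))
    }
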
I1213 14:41:22.300507 1291317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 14:41:22.408361 1291317 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 14:41:22.424556 1291317 certs.go:69] Setting up /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018 for IP: 192.168.49.2
I1213 14:41:22.424567 1291317 certs.go:195] generating shared ca certs ...
I1213 14:41:22.424582 1291317 certs.go:227] acquiring lock for ca certs: {Name:mkf797b728375051472268cde480fb2de1960a3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:22.424712 1291317 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key
I1213 14:41:22.424751 1291317 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key
I1213 14:41:22.424757 1291317 certs.go:257] generating profile certs ...
I1213 14:41:22.424814 1291317 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key
I1213 14:41:22.424822 1291317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt with IP's: []
I1213 14:41:22.806904 1291317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt ...
I1213 14:41:22.806922 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.crt: {Name:mk5ab195bf1a7056b153a6bbf68eee9801937361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:22.807131 1291317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key ...
I1213 14:41:22.807138 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/client.key: {Name:mkb108d763016aabf0c2fbb9da04655d4ad7bb8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:22.807232 1291317 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee
I1213 14:41:22.807244 1291317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1213 14:41:23.082234 1291317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee ...
I1213 14:41:23.082257 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee: {Name:mkc9642dd6a076d13a01d0176e4833c78b56f473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:23.082459 1291317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee ...
I1213 14:41:23.082467 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee: {Name:mkd82d675c96c67f08f95e66a204f12bd06128cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:23.082561 1291317 certs.go:382] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt.d0505aee -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt
I1213 14:41:23.082634 1291317 certs.go:386] copying /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key.d0505aee -> /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key
I1213 14:41:23.082704 1291317 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key
I1213 14:41:23.082721 1291317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt with IP's: []
I1213 14:41:23.386520 1291317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt ...
I1213 14:41:23.386536 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt: {Name:mk38f920eb041bcd85320119d02a87fad63b434a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:23.386712 1291317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key ...
I1213 14:41:23.386720 1291317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key: {Name:mk067b9e8d42c9dea7e4b5defd6063b282b5adbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 14:41:23.386896 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem (1338 bytes)
W1213 14:41:23.386935 1291317 certs.go:480] ignoring /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934_empty.pem, impossibly tiny 0 bytes
I1213 14:41:23.386942 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca-key.pem (1675 bytes)
I1213 14:41:23.386971 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/ca.pem (1082 bytes)
I1213 14:41:23.386993 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/cert.pem (1123 bytes)
I1213 14:41:23.387016 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/key.pem (1675 bytes)
I1213 14:41:23.387059 1291317 certs.go:484] found cert: /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem (1708 bytes)
I1213 14:41:23.387752 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1213 14:41:23.408658 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1213 14:41:23.427993 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1213 14:41:23.446668 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1213 14:41:23.464719 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1213 14:41:23.482423 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1213 14:41:23.499968 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1213 14:41:23.517971 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/profiles/functional-562018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1213 14:41:23.536190 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1213 14:41:23.554570 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/certs/1252934.pem --> /usr/share/ca-certificates/1252934.pem (1338 bytes)
I1213 14:41:23.572670 1291317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22122-1251074/.minikube/files/etc/ssl/certs/12529342.pem --> /usr/share/ca-certificates/12529342.pem (1708 bytes)
I1213 14:41:23.590516 1291317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1213 14:41:23.603832 1291317 ssh_runner.go:195] Run: openssl version
I1213 14:41:23.610392 1291317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1213 14:41:23.618129 1291317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1213 14:41:23.625535 1291317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1213 14:41:23.629179 1291317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 14:31 /usr/share/ca-certificates/minikubeCA.pem
I1213 14:41:23.629234 1291317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1213 14:41:23.670264 1291317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1213 14:41:23.677744 1291317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1213 14:41:23.685265 1291317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1252934.pem
I1213 14:41:23.692739 1291317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1252934.pem /etc/ssl/certs/1252934.pem
I1213 14:41:23.700339 1291317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1252934.pem
I1213 14:41:23.704057 1291317 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 14:41 /usr/share/ca-certificates/1252934.pem
I1213 14:41:23.704130 1291317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1252934.pem
I1213 14:41:23.744828 1291317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1213 14:41:23.752071 1291317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1252934.pem /etc/ssl/certs/51391683.0
I1213 14:41:23.759251 1291317 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12529342.pem
I1213 14:41:23.766818 1291317 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12529342.pem /etc/ssl/certs/12529342.pem
I1213 14:41:23.774446 1291317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12529342.pem
I1213 14:41:23.778137 1291317 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 14:41 /usr/share/ca-certificates/12529342.pem
I1213 14:41:23.778213 1291317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12529342.pem
I1213 14:41:23.821193 1291317 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1213 14:41:23.828637 1291317 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12529342.pem /etc/ssl/certs/3ec20f2e.0
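
(Editorial aside: the openssl/ln sequence above installs each CA the way OpenSSL's hashed certificate directory expects: compute the subject hash (b5213941, 51391683, 3ec20f2e on this run) and symlink <hash>.0 in /etc/ssl/certs to the PEM. The same step as a hedged Go sketch shelling out to openssl; installCA is an illustrative name, and writing under /etc/ssl/certs needs root:)

    // Sketch of the hash-and-symlink step: /etc/ssl/certs/<subject-hash>.0
    // must point at the PEM for OpenSSL-style CA lookup to find it.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
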
I1213 14:41:23.835987 1291317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1213 14:41:23.839834 1291317 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1213 14:41:23.839887 1291317 kubeadm.go:401] StartCluster: {Name:functional-562018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-562018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 14:41:23.839966 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1213 14:41:23.840026 1291317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1213 14:41:23.866656 1291317 cri.go:89] found id: ""
I1213 14:41:23.866717 1291317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1213 14:41:23.874680 1291317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1213 14:41:23.882664 1291317 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1213 14:41:23.882737 1291317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 14:41:23.890810 1291317 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 14:41:23.890821 1291317 kubeadm.go:158] found existing configuration files:
I1213 14:41:23.890896 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1213 14:41:23.898898 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 14:41:23.898961 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 14:41:23.906510 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1213 14:41:23.914523 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 14:41:23.914589 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 14:41:23.922447 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1213 14:41:23.930721 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 14:41:23.930780 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 14:41:23.938490 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1213 14:41:23.946556 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 14:41:23.946617 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 14:41:23.954295 1291317 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1213 14:41:23.995097 1291317 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1213 14:41:23.995542 1291317 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 14:41:24.096930 1291317 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1213 14:41:24.096999 1291317 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1213 14:41:24.097033 1291317 kubeadm.go:319] OS: Linux
I1213 14:41:24.097093 1291317 kubeadm.go:319] CGROUPS_CPU: enabled
I1213 14:41:24.097168 1291317 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1213 14:41:24.097221 1291317 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1213 14:41:24.097278 1291317 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1213 14:41:24.097328 1291317 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1213 14:41:24.097384 1291317 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1213 14:41:24.097428 1291317 kubeadm.go:319] CGROUPS_PIDS: enabled
I1213 14:41:24.097475 1291317 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1213 14:41:24.097528 1291317 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1213 14:41:24.172417 1291317 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 14:41:24.172520 1291317 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 14:41:24.172635 1291317 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 14:41:24.183685 1291317 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 14:41:24.190148 1291317 out.go:252] - Generating certificates and keys ...
I1213 14:41:24.190238 1291317 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 14:41:24.190303 1291317 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 14:41:24.470926 1291317 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1213 14:41:24.607717 1291317 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1213 14:41:24.862134 1291317 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1213 14:41:24.932002 1291317 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1213 14:41:25.274699 1291317 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1213 14:41:25.274855 1291317 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1213 14:41:25.429840 1291317 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1213 14:41:25.429993 1291317 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1213 14:41:26.039786 1291317 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1213 14:41:26.294725 1291317 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1213 14:41:26.555089 1291317 kubeadm.go:319] [certs] Generating "sa" key and public key
I1213 14:41:26.555232 1291317 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 14:41:26.737194 1291317 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 14:41:27.132301 1291317 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 14:41:27.252866 1291317 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 14:41:27.409575 1291317 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 14:41:27.703301 1291317 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 14:41:27.704048 1291317 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 14:41:27.706873 1291317 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 14:41:27.710780 1291317 out.go:252] - Booting up control plane ...
I1213 14:41:27.710880 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 14:41:27.710956 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 14:41:27.711022 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 14:41:27.727883 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 14:41:27.728167 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 14:41:27.737323 1291317 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 14:41:27.738013 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 14:41:27.738316 1291317 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 14:41:27.877372 1291317 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 14:41:27.877486 1291317 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 14:45:27.878528 1291317 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001226262s
I1213 14:45:27.878549 1291317 kubeadm.go:319]
I1213 14:45:27.878605 1291317 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1213 14:45:27.878638 1291317 kubeadm.go:319] - The kubelet is not running
I1213 14:45:27.878741 1291317 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1213 14:45:27.878745 1291317 kubeadm.go:319]
I1213 14:45:27.878848 1291317 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1213 14:45:27.878879 1291317 kubeadm.go:319] - 'systemctl status kubelet'
I1213 14:45:27.878909 1291317 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1213 14:45:27.878912 1291317 kubeadm.go:319]
I1213 14:45:27.884102 1291317 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1213 14:45:27.884844 1291317 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1213 14:45:27.885034 1291317 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1213 14:45:27.885456 1291317 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1213 14:45:27.885464 1291317 kubeadm.go:319]
I1213 14:45:27.885583 1291317 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
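
(Editorial aside: the failure here is concrete: kubeadm's wait-control-plane phase polls the kubelet's local healthz endpoint, the log itself spells out the equivalent curl, and gave up after 4m0s because the kubelet never answered, so the real root cause lives in 'journalctl -xeu kubelet' rather than in this output. The probe is trivial to reproduce; a minimal Go sketch of the same check:)

    // Sketch: the same health check kubeadm reports timing out on above,
    // a GET against the kubelet's healthz port with a short timeout.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://127.0.0.1:10248/healthz")
        if err != nil {
            fmt.Println("kubelet not healthy:", err) // what the 4m0s wait kept seeing
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body)) // expect "200 OK" and "ok" when healthy
    }
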
W1213 14:45:27.885709 1291317 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-562018 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001226262s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
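The SystemVerification warning in this dump names the escape hatch directly: for kubelet v1.35 or newer, a cgroup v1 host is rejected unless the kubelet configuration option 'FailCgroupV1' is set to 'false' (the failing kubeadm invocation already skips SystemVerification via --ignore-preflight-errors, so the missing piece is the kubelet configuration itself). A minimal sketch of one way to express that, as an extra KubeletConfiguration document appended to the kubeadm config this run uses; the lowercase failCgroupV1 spelling and this placement are assumptions, not something the log confirms:

sudo tee -a /var/tmp/minikube/kubeadm.yaml <<'EOF'
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# assumption: failCgroupV1 is the config-file form of the
# 'FailCgroupV1' option named in the warning above
failCgroupV1: false
EOF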
I1213 14:45:27.885811 1291317 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1213 14:45:28.298047 1291317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1213 14:45:28.311884 1291317 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1213 14:45:28.311946 1291317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 14:45:28.320182 1291317 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 14:45:28.320191 1291317 kubeadm.go:158] found existing configuration files:
I1213 14:45:28.320243 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1213 14:45:28.327851 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 14:45:28.327911 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 14:45:28.335606 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1213 14:45:28.343859 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 14:45:28.343917 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 14:45:28.351548 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1213 14:45:28.359482 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 14:45:28.359537 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 14:45:28.367254 1291317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1213 14:45:28.375153 1291317 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 14:45:28.375225 1291317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 14:45:28.382672 1291317 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1213 14:45:28.423941 1291317 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1213 14:45:28.423992 1291317 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 14:45:28.503918 1291317 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1213 14:45:28.503984 1291317 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1213 14:45:28.504018 1291317 kubeadm.go:319] OS: Linux
I1213 14:45:28.504062 1291317 kubeadm.go:319] CGROUPS_CPU: enabled
I1213 14:45:28.504109 1291317 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1213 14:45:28.504155 1291317 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1213 14:45:28.504202 1291317 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1213 14:45:28.504248 1291317 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1213 14:45:28.504302 1291317 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1213 14:45:28.504345 1291317 kubeadm.go:319] CGROUPS_PIDS: enabled
I1213 14:45:28.504392 1291317 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1213 14:45:28.504436 1291317 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1213 14:45:28.578183 1291317 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 14:45:28.578287 1291317 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 14:45:28.578376 1291317 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 14:45:28.587730 1291317 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 14:45:28.593187 1291317 out.go:252] - Generating certificates and keys ...
I1213 14:45:28.593294 1291317 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 14:45:28.593366 1291317 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 14:45:28.593463 1291317 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1213 14:45:28.593524 1291317 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1213 14:45:28.593605 1291317 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1213 14:45:28.593658 1291317 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1213 14:45:28.593726 1291317 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1213 14:45:28.593798 1291317 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1213 14:45:28.593877 1291317 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1213 14:45:28.593955 1291317 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1213 14:45:28.593998 1291317 kubeadm.go:319] [certs] Using the existing "sa" key
I1213 14:45:28.594058 1291317 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 14:45:28.823814 1291317 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 14:45:29.028475 1291317 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 14:45:29.231229 1291317 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 14:45:29.658022 1291317 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 14:45:30.068058 1291317 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 14:45:30.068793 1291317 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 14:45:30.071804 1291317 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 14:45:30.075282 1291317 out.go:252] - Booting up control plane ...
I1213 14:45:30.075406 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 14:45:30.075483 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 14:45:30.075548 1291317 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 14:45:30.099243 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 14:45:30.099378 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 14:45:30.108440 1291317 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 14:45:30.108932 1291317 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 14:45:30.109205 1291317 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 14:45:30.237905 1291317 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 14:45:30.238019 1291317 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 14:49:30.238851 1291317 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001274207s
I1213 14:49:30.238872 1291317 kubeadm.go:319]
I1213 14:49:30.238928 1291317 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1213 14:49:30.238960 1291317 kubeadm.go:319] - The kubelet is not running
I1213 14:49:30.239064 1291317 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1213 14:49:30.239068 1291317 kubeadm.go:319]
I1213 14:49:30.239204 1291317 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1213 14:49:30.239245 1291317 kubeadm.go:319] - 'systemctl status kubelet'
I1213 14:49:30.239276 1291317 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1213 14:49:30.239279 1291317 kubeadm.go:319]
I1213 14:49:30.243559 1291317 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1213 14:49:30.244038 1291317 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1213 14:49:30.244156 1291317 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1213 14:49:30.244401 1291317 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1213 14:49:30.244405 1291317 kubeadm.go:319]
I1213 14:49:30.244474 1291317 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1213 14:49:30.244539 1291317 kubeadm.go:403] duration metric: took 8m6.40465461s to StartCluster
I1213 14:49:30.244574 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1213 14:49:30.244645 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1213 14:49:30.269858 1291317 cri.go:89] found id: ""
I1213 14:49:30.269885 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.269892 1291317 logs.go:284] No container was found matching "kube-apiserver"
I1213 14:49:30.269897 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1213 14:49:30.269957 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1213 14:49:30.295913 1291317 cri.go:89] found id: ""
I1213 14:49:30.295927 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.295934 1291317 logs.go:284] No container was found matching "etcd"
I1213 14:49:30.295939 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1213 14:49:30.296006 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1213 14:49:30.321841 1291317 cri.go:89] found id: ""
I1213 14:49:30.321855 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.321862 1291317 logs.go:284] No container was found matching "coredns"
I1213 14:49:30.321867 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1213 14:49:30.321927 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1213 14:49:30.358313 1291317 cri.go:89] found id: ""
I1213 14:49:30.358327 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.358334 1291317 logs.go:284] No container was found matching "kube-scheduler"
I1213 14:49:30.358339 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1213 14:49:30.358397 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1213 14:49:30.390205 1291317 cri.go:89] found id: ""
I1213 14:49:30.390219 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.390227 1291317 logs.go:284] No container was found matching "kube-proxy"
I1213 14:49:30.390232 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1213 14:49:30.390292 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1213 14:49:30.421227 1291317 cri.go:89] found id: ""
I1213 14:49:30.421242 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.421250 1291317 logs.go:284] No container was found matching "kube-controller-manager"
I1213 14:49:30.421255 1291317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1213 14:49:30.421318 1291317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1213 14:49:30.446736 1291317 cri.go:89] found id: ""
I1213 14:49:30.446756 1291317 logs.go:282] 0 containers: []
W1213 14:49:30.446765 1291317 logs.go:284] No container was found matching "kindnet"
I1213 14:49:30.446775 1291317 logs.go:123] Gathering logs for kubelet ...
I1213 14:49:30.446785 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1213 14:49:30.506280 1291317 logs.go:123] Gathering logs for dmesg ...
I1213 14:49:30.506300 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1213 14:49:30.524074 1291317 logs.go:123] Gathering logs for describe nodes ...
I1213 14:49:30.524091 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1213 14:49:30.591576 1291317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1213 14:49:30.582607 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.584266 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.584973 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.586642 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.587613 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1213 14:49:30.582607 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.584266 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.584973 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.586642 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:30.587613 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1213 14:49:30.591587 1291317 logs.go:123] Gathering logs for containerd ...
I1213 14:49:30.591600 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1213 14:49:30.628672 1291317 logs.go:123] Gathering logs for container status ...
I1213 14:49:30.628695 1291317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1213 14:49:30.656249 1291317 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001274207s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1213 14:49:30.656305 1291317 out.go:285] *
W1213 14:49:30.659411 1291317 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001274207s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1213 14:49:30.659450 1291317 out.go:285] *
W1213 14:49:30.661795 1291317 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1213 14:49:30.666707 1291317 out.go:203]
W1213 14:49:30.670410 1291317 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001274207s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1213 14:49:30.670458 1291317 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1213 14:49:30.670476 1291317 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1213 14:49:30.673588 1291317 out.go:203]
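The suggestion printed just above is minikube's generic remedy for K8S_KUBELET_NOT_RUNNING. A hedged form of that retry against this test's profile, using the same binary the harness invokes below; note that the kubelet journal further down blames cgroup v1 validation rather than the cgroup driver, so this flag alone may not be enough:

out/minikube-linux-arm64 start -p functional-562018 \
  --extra-config=kubelet.cgroup-driver=systemd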
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975014211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975030859Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975067593Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975081886Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975092060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975103170Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975111965Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975121885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975138172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975169991Z" level=info msg="Connect containerd service"
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.975508085Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.976079298Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.993990477Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.994055034Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.994084490Z" level=info msg="Start subscribing containerd event"
Dec 13 14:41:21 functional-562018 containerd[763]: time="2025-12-13T14:41:21.994130454Z" level=info msg="Start recovering state"
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.040971827Z" level=info msg="Start event monitor"
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041022460Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041031650Z" level=info msg="Start streaming server"
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041041209Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041050004Z" level=info msg="runtime interface starting up..."
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041062878Z" level=info msg="starting plugins..."
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.041083423Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 13 14:41:22 functional-562018 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 14:41:22 functional-562018 containerd[763]: time="2025-12-13T14:41:22.043229514Z" level=info msg="containerd successfully booted in 0.089795s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1213 14:49:31.638154 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:31.638541 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:31.639918 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:31.640541 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 14:49:31.642121 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[ +18.759368] overlayfs: idmapped layers are currently not supported
[Dec13 13:37] overlayfs: idmapped layers are currently not supported
[Dec13 13:38] overlayfs: idmapped layers are currently not supported
[Dec13 13:39] overlayfs: idmapped layers are currently not supported
[Dec13 13:40] overlayfs: idmapped layers are currently not supported
[Dec13 13:42] overlayfs: idmapped layers are currently not supported
[Dec13 13:44] overlayfs: idmapped layers are currently not supported
[Dec13 13:55] overlayfs: idmapped layers are currently not supported
[Dec13 13:57] overlayfs: idmapped layers are currently not supported
[ +37.486494] overlayfs: idmapped layers are currently not supported
[ +5.749635] overlayfs: idmapped layers are currently not supported
[Dec13 13:58] overlayfs: idmapped layers are currently not supported
[Dec13 13:59] overlayfs: idmapped layers are currently not supported
[Dec13 14:00] overlayfs: idmapped layers are currently not supported
[Dec13 14:01] overlayfs: idmapped layers are currently not supported
[ +10.745175] overlayfs: idmapped layers are currently not supported
[Dec13 14:03] overlayfs: idmapped layers are currently not supported
[ +10.655903] overlayfs: idmapped layers are currently not supported
[Dec13 14:04] overlayfs: idmapped layers are currently not supported
[Dec13 14:21] overlayfs: idmapped layers are currently not supported
[Dec13 14:23] overlayfs: idmapped layers are currently not supported
[Dec13 14:25] overlayfs: idmapped layers are currently not supported
[Dec13 14:27] overlayfs: idmapped layers are currently not supported
[Dec13 14:28] overlayfs: idmapped layers are currently not supported
[Dec13 14:30] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
14:49:31 up 6:32, 0 user, load average: 0.25, 0.58, 1.06
Linux functional-562018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 13 14:49:28 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:28 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:28 functional-562018 kubelet[4678]: E1213 14:49:28.879760 4678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:49:28 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:49:29 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 13 14:49:29 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:29 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:29 functional-562018 kubelet[4683]: E1213 14:49:29.637317 4683 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 14:49:29 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:49:29 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:49:30 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 13 14:49:30 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:30 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:30 functional-562018 kubelet[4716]: E1213 14:49:30.400289 4716 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 14:49:30 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:49:30 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:49:31 functional-562018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 13 14:49:31 functional-562018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:31 functional-562018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:49:31 functional-562018 kubelet[4788]: E1213 14:49:31.156130 4788 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 14:49:31 functional-562018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:49:31 functional-562018 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
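The kubelet journal above pins down the root cause: kubelet v1.35.0-beta.0 fails its own configuration validation on a cgroup v1 host, so systemd restarts it in a loop (restart counters 318-321) and the 10248/healthz probe never succeeds. One quick way to confirm which cgroup hierarchy the node is actually on, run inside the node (for example via minikube ssh -p functional-562018):

stat -fc %T /sys/fs/cgroup
# 'cgroup2fs' => cgroup v2; 'tmpfs' => the legacy cgroup v1 hierarchy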
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-562018 -n functional-562018: exit status 6 (344.998217ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1213 14:49:32.110359 1296988 status.go:458] kubeconfig endpoint: get endpoint: "functional-562018" does not appear in /home/jenkins/minikube-integration/22122-1251074/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-562018" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.64s)
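Aside from the kubelet failure, the status output above also warns that kubectl is pointing at a stale context and names its own fix; applied to this test's profile that would look something like the following (the -p flag is an assumption, added here to target the right profile):

out/minikube-linux-arm64 update-context -p functional-562018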