=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1217 10:32:12.071102 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:34:28.205783 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:34:55.914676 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.084651 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.091421 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.102926 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.124389 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.165844 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.247408 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.408950 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:43.730706 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:44.372858 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:45.654532 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:48.216742 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:35:53.338679 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:36:03.580964 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:36:24.062666 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:37:05.024572 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:38:26.948589 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-626013/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 10:39:28.205599 2924574 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
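The cert_rotation errors above all point at client certs under profile directories (addons-413632, functional-626013) that earlier tests have already torn down, while the shared kubeconfig still references the files. A hedged diagnostic sketch, not part of this run, to list which cert paths that kubeconfig still points at (path taken from the KUBECONFIG value printed below):
# Hypothetical check: list client-certificate paths still referenced
# by the shared integration kubeconfig.
grep -n 'client-certificate' /home/jenkins/minikube-integration/22182-2922712/kubeconfig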
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m19.831475025s)
-- stdout --
* [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22182
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-232588" primary control-plane node in "functional-232588" cluster
* Pulling base image v0.0.48-1765661130-22141 ...
* Found network options:
- HTTP_PROXY=localhost:36263
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:36263 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000105065s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001251054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001251054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
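The two proxy warnings at the top of the stderr block say HTTP_PROXY=localhost:36263 was not passed into the docker env and that NO_PROXY does not cover the minikube IP (192.168.49.2). A minimal workaround sketch along the lines of the linked vpn_and_proxy handbook page; the exact NO_PROXY entries (node IP plus the 10.96.0.0/12 service CIDR from the cluster config later in these logs) are assumptions, not something this run did:
# Hedged sketch: let in-cluster traffic bypass the local proxy before starting.
export NO_PROXY=localhost,127.0.0.1,192.168.49.2,10.96.0.0/12
export no_proxy="$NO_PROXY"
out/minikube-linux-arm64 start -p functional-232588 --driver=docker --container-runtime=containerd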
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
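For the kubelet failure itself, the stderr block ends with minikube's own Suggestion line and a related issue link. A sketch of that suggested retry, mirroring the failing command with only the suggested flag added (whether it also clears the cgroup v1 deprecation noted in the kubeadm warnings is untested here):
# Retry per the log's Suggestion line; the extra-config flag is copied
# verbatim from the stderr above, the rest mirrors the failing invocation.
out/minikube-linux-arm64 start -p functional-232588 --memory=4096 --apiserver-port=8441 \
  --wait=all --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.35.0-rc.1 \
  --extra-config=kubelet.cgroup-driver=systemd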
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-232588
helpers_test.go:244: (dbg) docker inspect functional-232588:
-- stdout --
[
{
"Id": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
"Created": "2025-12-17T10:31:38.417629873Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2962990,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-17T10:31:38.484538313Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
"ResolvConfPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hostname",
"HostsPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/hosts",
"LogPath": "/var/lib/docker/containers/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55/f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55-json.log",
"Name": "/functional-232588",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-232588:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-232588",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "f67a3fa8da990511ea7b9ac4d66db19d72d3058330886882a212e0ae83421f55",
"LowerDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813-init/diff:/var/lib/docker/overlay2/aa1c3cb837db05afa9c265c464cc269fa9c11658f422c1c8858e1287ac952f12/diff",
"MergedDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/merged",
"UpperDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/diff",
"WorkDir": "/var/lib/docker/overlay2/b5b8bc7a062c37ca162001e717af6f64909dfacb81545bffc90d18ac6f3a6813/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-232588",
"Source": "/var/lib/docker/volumes/functional-232588/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-232588",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-232588",
"name.minikube.sigs.k8s.io": "functional-232588",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "cf91fdea6bf1c59282af014fad74b29d2456698ebca9b6be8c9685054b7d7df4",
"SandboxKey": "/var/run/docker/netns/cf91fdea6bf1",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35733"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35734"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35737"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35735"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "35736"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-232588": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "9e:06:f9:5f:98:3e",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "cf7a63e100f8f97f0cb760b53960a5eaeb1f5054bead79442486fc1d51c01ab7",
"EndpointID": "a3f4d6de946fb68269c7790dce129934f895a840ec5cebbe87fc0d49cb575c44",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-232588",
"f67a3fa8da99"
]
}
}
}
}
]
-- /stdout --
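The inspect dump confirms the container is Running with IP 192.168.49.2 on the functional-232588 network. When only those two facts are needed, docker's Go-template filter avoids the full JSON; a convenience sketch, not part of the harness:
# Extract just the state and the IP from the dump above.
docker inspect -f '{{.State.Status}}' functional-232588
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-232588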
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232588 -n functional-232588: exit status 6 (308.184854ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1217 10:39:53.704401 2968083 status.go:458] kubeconfig endpoint: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
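The status stderr shows the kubeconfig endpoint for functional-232588 is missing (start never finished writing it), while stdout points at `minikube update-context`. A hedged sketch of that manual fix, assuming one wanted to repoint kubectl at the profile outside the test harness:
# Repoint kubeconfig at the profile, then confirm the active context.
out/minikube-linux-arm64 -p functional-232588 update-context
kubectl config current-context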
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-232588 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-626013 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ ssh │ functional-626013 ssh sudo cat /etc/ssl/certs/29245742.pem │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ ssh │ functional-626013 ssh sudo cat /usr/share/ca-certificates/29245742.pem │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image load --daemon kicbase/echo-server:functional-626013 --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ ssh │ functional-626013 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image save kicbase/echo-server:functional-626013 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image rm kicbase/echo-server:functional-626013 --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ update-context │ functional-626013 update-context --alsologtostderr -v=2 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ update-context │ functional-626013 update-context --alsologtostderr -v=2 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ update-context │ functional-626013 update-context --alsologtostderr -v=2 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image save --daemon kicbase/echo-server:functional-626013 --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls --format short --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls --format yaml --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ ssh │ functional-626013 ssh pgrep buildkitd │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ │
│ image │ functional-626013 image ls --format json --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls --format table --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image build -t localhost/my-image:functional-626013 testdata/build --alsologtostderr │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ image │ functional-626013 image ls │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ delete │ -p functional-626013 │ functional-626013 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ 17 Dec 25 10:31 UTC │
│ start │ -p functional-232588 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232588 │ jenkins │ v1.37.0 │ 17 Dec 25 10:31 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 10:31:33
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 10:31:33.598246 2962598 out.go:360] Setting OutFile to fd 1 ...
I1217 10:31:33.598364 2962598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:33.598368 2962598 out.go:374] Setting ErrFile to fd 2...
I1217 10:31:33.598371 2962598 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 10:31:33.598613 2962598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-2922712/.minikube/bin
I1217 10:31:33.599029 2962598 out.go:368] Setting JSON to false
I1217 10:31:33.599862 2962598 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":58444,"bootTime":1765909050,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I1217 10:31:33.599922 2962598 start.go:143] virtualization:
I1217 10:31:33.604102 2962598 out.go:179] * [functional-232588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1217 10:31:33.608458 2962598 out.go:179] - MINIKUBE_LOCATION=22182
I1217 10:31:33.608568 2962598 notify.go:221] Checking for updates...
I1217 10:31:33.615460 2962598 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 10:31:33.618535 2962598 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22182-2922712/kubeconfig
I1217 10:31:33.621687 2962598 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-2922712/.minikube
I1217 10:31:33.624781 2962598 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1217 10:31:33.627732 2962598 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 10:31:33.630855 2962598 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 10:31:33.659354 2962598 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1217 10:31:33.659494 2962598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 10:31:33.718316 2962598 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 10:31:33.708911822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 10:31:33.718410 2962598 docker.go:319] overlay module found
I1217 10:31:33.721733 2962598 out.go:179] * Using the docker driver based on user configuration
I1217 10:31:33.724633 2962598 start.go:309] selected driver: docker
I1217 10:31:33.724641 2962598 start.go:927] validating driver "docker" against <nil>
I1217 10:31:33.724673 2962598 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 10:31:33.725398 2962598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 10:31:33.778807 2962598 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 10:31:33.769884695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 10:31:33.778949 2962598 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1217 10:31:33.779161 2962598 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 10:31:33.782197 2962598 out.go:179] * Using Docker driver with root privileges
I1217 10:31:33.785123 2962598 cni.go:84] Creating CNI manager for ""
I1217 10:31:33.785183 2962598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1217 10:31:33.785190 2962598 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1217 10:31:33.785314 2962598 start.go:353] cluster config:
{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 10:31:33.788604 2962598 out.go:179] * Starting "functional-232588" primary control-plane node in "functional-232588" cluster
I1217 10:31:33.791438 2962598 cache.go:134] Beginning downloading kic base image for docker with containerd
I1217 10:31:33.794332 2962598 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
I1217 10:31:33.797354 2962598 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 10:31:33.797355 2962598 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
I1217 10:31:33.797401 2962598 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
I1217 10:31:33.797425 2962598 cache.go:65] Caching tarball of preloaded images
I1217 10:31:33.797512 2962598 preload.go:238] Found /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1217 10:31:33.797521 2962598 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
I1217 10:31:33.797859 2962598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
I1217 10:31:33.797876 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json: {Name:mk49253aa6bfdc09f9bf70cb1e55f0e79c85a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:33.816497 2962598 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
I1217 10:31:33.816514 2962598 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
I1217 10:31:33.816533 2962598 cache.go:243] Successfully downloaded all kic artifacts
I1217 10:31:33.816562 2962598 start.go:360] acquireMachinesLock for functional-232588: {Name:mkb7828f32963a62377c74058da795e63eb677f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 10:31:33.816671 2962598 start.go:364] duration metric: took 95.407µs to acquireMachinesLock for "functional-232588"
I1217 10:31:33.816695 2962598 start.go:93] Provisioning new machine with config: &{Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1217 10:31:33.816763 2962598 start.go:125] createHost starting for "" (driver="docker")
I1217 10:31:33.820275 2962598 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1217 10:31:33.820609 2962598 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:36263 to docker env.
I1217 10:31:33.820636 2962598 start.go:159] libmachine.API.Create for "functional-232588" (driver="docker")
I1217 10:31:33.820656 2962598 client.go:173] LocalClient.Create starting
I1217 10:31:33.820725 2962598 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem
I1217 10:31:33.820773 2962598 main.go:143] libmachine: Decoding PEM data...
I1217 10:31:33.820790 2962598 main.go:143] libmachine: Parsing certificate...
I1217 10:31:33.820839 2962598 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem
I1217 10:31:33.820861 2962598 main.go:143] libmachine: Decoding PEM data...
I1217 10:31:33.820872 2962598 main.go:143] libmachine: Parsing certificate...
I1217 10:31:33.821219 2962598 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 10:31:33.837120 2962598 cli_runner.go:211] docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 10:31:33.837204 2962598 network_create.go:284] running [docker network inspect functional-232588] to gather additional debugging logs...
I1217 10:31:33.837220 2962598 cli_runner.go:164] Run: docker network inspect functional-232588
W1217 10:31:33.854038 2962598 cli_runner.go:211] docker network inspect functional-232588 returned with exit code 1
I1217 10:31:33.854057 2962598 network_create.go:287] error running [docker network inspect functional-232588]: docker network inspect functional-232588: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-232588 not found
I1217 10:31:33.854069 2962598 network_create.go:289] output of [docker network inspect functional-232588]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-232588 not found
** /stderr **
I1217 10:31:33.854164 2962598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 10:31:33.870707 2962598 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c72a0}
I1217 10:31:33.870741 2962598 network_create.go:124] attempt to create docker network functional-232588 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1217 10:31:33.870800 2962598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-232588 functional-232588
I1217 10:31:33.940249 2962598 network_create.go:108] docker network functional-232588 192.168.49.0/24 created
I1217 10:31:33.940271 2962598 kic.go:121] calculated static IP "192.168.49.2" for the "functional-232588" container
I1217 10:31:33.940342 2962598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1217 10:31:33.956273 2962598 cli_runner.go:164] Run: docker volume create functional-232588 --label name.minikube.sigs.k8s.io=functional-232588 --label created_by.minikube.sigs.k8s.io=true
I1217 10:31:33.974428 2962598 oci.go:103] Successfully created a docker volume functional-232588
I1217 10:31:33.974503 2962598 cli_runner.go:164] Run: docker run --rm --name functional-232588-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232588 --entrypoint /usr/bin/test -v functional-232588:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
I1217 10:31:34.510953 2962598 oci.go:107] Successfully prepared a docker volume functional-232588
I1217 10:31:34.511025 2962598 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 10:31:34.511033 2962598 kic.go:194] Starting extracting preloaded images to volume ...
I1217 10:31:34.511101 2962598 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
I1217 10:31:38.349672 2962598 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22182-2922712/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.838536036s)
I1217 10:31:38.349694 2962598 kic.go:203] duration metric: took 3.838656878s to extract preloaded images to volume ...
W1217 10:31:38.349846 2962598 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1217 10:31:38.349963 2962598 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1217 10:31:38.401654 2962598 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-232588 --name functional-232588 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232588 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-232588 --network functional-232588 --ip 192.168.49.2 --volume functional-232588:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
I1217 10:31:38.702497 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Running}}
I1217 10:31:38.728034 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 10:31:38.753643 2962598 cli_runner.go:164] Run: docker exec functional-232588 stat /var/lib/dpkg/alternatives/iptables
I1217 10:31:38.805029 2962598 oci.go:144] the created container "functional-232588" has a running status.
I1217 10:31:38.805083 2962598 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519...
I1217 10:31:38.809311 2962598 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519.pub --> /home/docker/.ssh/authorized_keys (81 bytes)
I1217 10:31:38.833921 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 10:31:38.859734 2962598 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1217 10:31:38.859751 2962598 kic_runner.go:114] Args: [docker exec --privileged functional-232588 chown docker:docker /home/docker/.ssh/authorized_keys]
I1217 10:31:38.910033 2962598 cli_runner.go:164] Run: docker container inspect functional-232588 --format={{.State.Status}}
I1217 10:31:38.929491 2962598 machine.go:94] provisionDockerMachine start ...
I1217 10:31:38.929580 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:38.955585 2962598 main.go:143] libmachine: Using SSH client type: native
I1217 10:31:38.955707 2962598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil> [] 0s} 127.0.0.1 35733 <nil> <nil>}
I1217 10:31:38.955713 2962598 main.go:143] libmachine: About to run SSH command:
hostname
I1217 10:31:38.957049 2962598 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56616->127.0.0.1:35733: read: connection reset by peer
I1217 10:31:42.097414 2962598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
I1217 10:31:42.097431 2962598 ubuntu.go:182] provisioning hostname "functional-232588"
I1217 10:31:42.097521 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:42.129452 2962598 main.go:143] libmachine: Using SSH client type: native
I1217 10:31:42.129563 2962598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil> [] 0s} 127.0.0.1 35733 <nil> <nil>}
I1217 10:31:42.129571 2962598 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-232588 && echo "functional-232588" | sudo tee /etc/hostname
I1217 10:31:42.288599 2962598 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232588
I1217 10:31:42.288685 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:42.308520 2962598 main.go:143] libmachine: Using SSH client type: native
I1217 10:31:42.308637 2962598 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dc5a0] 0x3deaa0 <nil> [] 0s} 127.0.0.1 35733 <nil> <nil>}
I1217 10:31:42.308651 2962598 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-232588' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232588/g' /etc/hosts;
else
echo '127.0.1.1 functional-232588' | sudo tee -a /etc/hosts;
fi
fi
I1217 10:31:42.444695 2962598 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 10:31:42.444712 2962598 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22182-2922712/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-2922712/.minikube}
I1217 10:31:42.444739 2962598 ubuntu.go:190] setting up certificates
I1217 10:31:42.444751 2962598 provision.go:84] configureAuth start
I1217 10:31:42.444811 2962598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
I1217 10:31:42.460988 2962598 provision.go:143] copyHostCerts
I1217 10:31:42.461050 2962598 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem, removing ...
I1217 10:31:42.461058 2962598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem
I1217 10:31:42.461137 2962598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/key.pem (1675 bytes)
I1217 10:31:42.461232 2962598 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem, removing ...
I1217 10:31:42.461236 2962598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem
I1217 10:31:42.461263 2962598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.pem (1082 bytes)
I1217 10:31:42.461362 2962598 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem, removing ...
I1217 10:31:42.461365 2962598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem
I1217 10:31:42.461389 2962598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-2922712/.minikube/cert.pem (1123 bytes)
I1217 10:31:42.461435 2962598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem org=jenkins.functional-232588 san=[127.0.0.1 192.168.49.2 functional-232588 localhost minikube]
I1217 10:31:42.655813 2962598 provision.go:177] copyRemoteCerts
I1217 10:31:42.655884 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 10:31:42.655927 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:42.673116 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 10:31:42.768171 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1217 10:31:42.785809 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1217 10:31:42.805122 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1217 10:31:42.823296 2962598 provision.go:87] duration metric: took 378.520339ms to configureAuth
I1217 10:31:42.823314 2962598 ubuntu.go:206] setting minikube options for container-runtime
I1217 10:31:42.823506 2962598 config.go:182] Loaded profile config "functional-232588": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 10:31:42.823513 2962598 machine.go:97] duration metric: took 3.894012436s to provisionDockerMachine
I1217 10:31:42.823519 2962598 client.go:176] duration metric: took 9.002859063s to LocalClient.Create
I1217 10:31:42.823544 2962598 start.go:167] duration metric: took 9.002908243s to libmachine.API.Create "functional-232588"
I1217 10:31:42.823559 2962598 start.go:293] postStartSetup for "functional-232588" (driver="docker")
I1217 10:31:42.823568 2962598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 10:31:42.823618 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 10:31:42.823655 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:42.844318 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 10:31:42.940976 2962598 ssh_runner.go:195] Run: cat /etc/os-release
I1217 10:31:42.944626 2962598 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1217 10:31:42.944645 2962598 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1217 10:31:42.944655 2962598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/addons for local assets ...
I1217 10:31:42.944710 2962598 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-2922712/.minikube/files for local assets ...
I1217 10:31:42.944794 2962598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem -> 29245742.pem in /etc/ssl/certs
I1217 10:31:42.944871 2962598 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts -> hosts in /etc/test/nested/copy/2924574
I1217 10:31:42.944920 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2924574
I1217 10:31:42.952861 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /etc/ssl/certs/29245742.pem (1708 bytes)
I1217 10:31:42.970783 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/test/nested/copy/2924574/hosts --> /etc/test/nested/copy/2924574/hosts (40 bytes)
I1217 10:31:42.988219 2962598 start.go:296] duration metric: took 164.646335ms for postStartSetup
I1217 10:31:42.988683 2962598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
I1217 10:31:43.007531 2962598 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/config.json ...
I1217 10:31:43.007841 2962598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1217 10:31:43.007897 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:43.032717 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 10:31:43.125996 2962598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1217 10:31:43.131195 2962598 start.go:128] duration metric: took 9.31441827s to createHost
I1217 10:31:43.131210 2962598 start.go:83] releasing machines lock for "functional-232588", held for 9.314533254s
I1217 10:31:43.131284 2962598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232588
I1217 10:31:43.155749 2962598 out.go:179] * Found network options:
I1217 10:31:43.158742 2962598 out.go:179] - HTTP_PROXY=localhost:36263
W1217 10:31:43.161595 2962598 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1217 10:31:43.164541 2962598 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1217 10:31:43.167461 2962598 ssh_runner.go:195] Run: cat /version.json
I1217 10:31:43.167508 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:43.167547 2962598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 10:31:43.167597 2962598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232588
I1217 10:31:43.186927 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 10:31:43.198055 2962598 sshutil.go:56] new ssh client: &{IP:127.0.0.1 Port:35733 SSHKeyPath:/home/jenkins/minikube-integration/22182-2922712/.minikube/machines/functional-232588/id_ed25519 Username:docker}
I1217 10:31:43.280069 2962598 ssh_runner.go:195] Run: systemctl --version
I1217 10:31:43.369873 2962598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 10:31:43.374434 2962598 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 10:31:43.374494 2962598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 10:31:43.400564 2962598 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1217 10:31:43.400577 2962598 start.go:496] detecting cgroup driver to use...
I1217 10:31:43.400608 2962598 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1217 10:31:43.400664 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1217 10:31:43.415667 2962598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1217 10:31:43.428524 2962598 docker.go:218] disabling cri-docker service (if available) ...
I1217 10:31:43.428575 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 10:31:43.446194 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 10:31:43.464208 2962598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 10:31:43.576947 2962598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 10:31:43.692216 2962598 docker.go:234] disabling docker service ...
I1217 10:31:43.692270 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 10:31:43.716018 2962598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 10:31:43.729492 2962598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 10:31:43.847992 2962598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 10:31:43.965222 2962598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 10:31:43.978232 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1217 10:31:43.993695 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1217 10:31:44.003710 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1217 10:31:44.014438 2962598 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1217 10:31:44.014497 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1217 10:31:44.024409 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 10:31:44.033708 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1217 10:31:44.042666 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 10:31:44.052046 2962598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 10:31:44.060635 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1217 10:31:44.069867 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1217 10:31:44.078580 2962598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
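The sed edits above rewrite /etc/containerd/config.toml in place (cgroup driver, sandbox image, runc v2 shim, CNI conf dir, unprivileged ports). One way to eyeball the result before containerd is restarted (a sketch, not part of the captured log; assumes the profile container is still up):

  minikube -p functional-232588 ssh -- sudo grep -n 'SystemdCgroup\|sandbox_image\|conf_dir\|enable_unprivileged_ports' /etc/containerd/config.toml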
I1217 10:31:44.087841 2962598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 10:31:44.095695 2962598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 10:31:44.103761 2962598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 10:31:44.210981 2962598 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1217 10:31:44.337843 2962598 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1217 10:31:44.337903 2962598 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1217 10:31:44.341763 2962598 start.go:564] Will wait 60s for crictl version
I1217 10:31:44.341817 2962598 ssh_runner.go:195] Run: which crictl
I1217 10:31:44.345498 2962598 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1217 10:31:44.376350 2962598 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1217 10:31:44.376443 2962598 ssh_runner.go:195] Run: containerd --version
I1217 10:31:44.397410 2962598 ssh_runner.go:195] Run: containerd --version
I1217 10:31:44.424578 2962598 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
I1217 10:31:44.427566 2962598 cli_runner.go:164] Run: docker network inspect functional-232588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 10:31:44.443543 2962598 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1217 10:31:44.447409 2962598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 10:31:44.456948 2962598 kubeadm.go:884] updating cluster {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 10:31:44.457049 2962598 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 10:31:44.457117 2962598 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 10:31:44.482983 2962598 containerd.go:627] all images are preloaded for containerd runtime.
I1217 10:31:44.482995 2962598 containerd.go:534] Images already preloaded, skipping extraction
I1217 10:31:44.483051 2962598 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 10:31:44.511600 2962598 containerd.go:627] all images are preloaded for containerd runtime.
I1217 10:31:44.511611 2962598 cache_images.go:86] Images are preloaded, skipping loading
I1217 10:31:44.511618 2962598 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
I1217 10:31:44.511712 2962598 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
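The [Unit]/[Service] fragment above is what gets written to the node as a systemd drop-in (the scp step below copies it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A quick way to inspect what actually landed on the node (a sketch, not part of the captured log; assumes the profile still exists):

  minikube -p functional-232588 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf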
I1217 10:31:44.511775 2962598 ssh_runner.go:195] Run: sudo crictl info
I1217 10:31:44.537024 2962598 cni.go:84] Creating CNI manager for ""
I1217 10:31:44.537034 2962598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1217 10:31:44.537046 2962598 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 10:31:44.537066 2962598 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232588 NodeName:functional-232588 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 10:31:44.537172 2962598 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-232588"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-rc.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
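This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. It can be exercised without mutating the node via kubeadm's dry-run mode (a sketch, not part of the captured log; the binary and config paths are taken from later lines in this run):

  minikube -p functional-232588 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run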
I1217 10:31:44.537239 2962598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
I1217 10:31:44.545112 2962598 binaries.go:51] Found k8s binaries, skipping transfer
I1217 10:31:44.545172 2962598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 10:31:44.553167 2962598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
I1217 10:31:44.567465 2962598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I1217 10:31:44.580630 2962598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
I1217 10:31:44.593068 2962598 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1217 10:31:44.596992 2962598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 10:31:44.606592 2962598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 10:31:44.727401 2962598 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 10:31:44.743637 2962598 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588 for IP: 192.168.49.2
I1217 10:31:44.743648 2962598 certs.go:195] generating shared ca certs ...
I1217 10:31:44.743663 2962598 certs.go:227] acquiring lock for ca certs: {Name:mkdf3d8010df3a9ac65d768850be53b97cbfafe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:44.743810 2962598 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key
I1217 10:31:44.743866 2962598 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key
I1217 10:31:44.743873 2962598 certs.go:257] generating profile certs ...
I1217 10:31:44.743966 2962598 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key
I1217 10:31:44.743976 2962598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt with IP's: []
I1217 10:31:45.239331 2962598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt ...
I1217 10:31:45.239352 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.crt: {Name:mke8d0aba80eff817d699ddf08fa998e09130a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:45.239767 2962598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key ...
I1217 10:31:45.239780 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/client.key: {Name:mk92276b55e08255e34c5bb60ae0a6286e9cc7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:45.240108 2962598 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0
I1217 10:31:45.240144 2962598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1217 10:31:45.432555 2962598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0 ...
I1217 10:31:45.432571 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0: {Name:mk69b41df65ca444e27b1eaeeeb71b80be470429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:45.432778 2962598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0 ...
I1217 10:31:45.432787 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0: {Name:mk4bf5862570a3598af337baf828881e83fdf726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:45.432873 2962598 certs.go:382] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt.a39919a0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt
I1217 10:31:45.432957 2962598 certs.go:386] copying /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key.a39919a0 -> /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key
I1217 10:31:45.433010 2962598 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key
I1217 10:31:45.433022 2962598 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt with IP's: []
I1217 10:31:45.576205 2962598 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt ...
I1217 10:31:45.576222 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt: {Name:mkb30de1a52b11f1f4c1e2a381dead0eecaeb6ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:45.576429 2962598 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key ...
I1217 10:31:45.576438 2962598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key: {Name:mkb9459de3f2240513b863aae0f32feb70e79e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 10:31:45.576655 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem (1338 bytes)
W1217 10:31:45.576697 2962598 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574_empty.pem, impossibly tiny 0 bytes
I1217 10:31:45.576706 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca-key.pem (1675 bytes)
I1217 10:31:45.576731 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/ca.pem (1082 bytes)
I1217 10:31:45.576757 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/cert.pem (1123 bytes)
I1217 10:31:45.576779 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/key.pem (1675 bytes)
I1217 10:31:45.576827 2962598 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem (1708 bytes)
I1217 10:31:45.577388 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 10:31:45.597721 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1217 10:31:45.616717 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 10:31:45.634970 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1217 10:31:45.654423 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1217 10:31:45.672383 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1217 10:31:45.691724 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 10:31:45.710414 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/profiles/functional-232588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1217 10:31:45.728226 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 10:31:45.746586 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/certs/2924574.pem --> /usr/share/ca-certificates/2924574.pem (1338 bytes)
I1217 10:31:45.764868 2962598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-2922712/.minikube/files/etc/ssl/certs/29245742.pem --> /usr/share/ca-certificates/29245742.pem (1708 bytes)
I1217 10:31:45.783193 2962598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 10:31:45.796026 2962598 ssh_runner.go:195] Run: openssl version
I1217 10:31:45.803008 2962598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 10:31:45.810596 2962598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 10:31:45.818301 2962598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 10:31:45.822066 2962598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 10:22 /usr/share/ca-certificates/minikubeCA.pem
I1217 10:31:45.822122 2962598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 10:31:45.874917 2962598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 10:31:45.884364 2962598 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1217 10:31:45.892610 2962598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2924574.pem
I1217 10:31:45.900194 2962598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2924574.pem /etc/ssl/certs/2924574.pem
I1217 10:31:45.907971 2962598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2924574.pem
I1217 10:31:45.911619 2962598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 10:31 /usr/share/ca-certificates/2924574.pem
I1217 10:31:45.911677 2962598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2924574.pem
I1217 10:31:45.952616 2962598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1217 10:31:45.960252 2962598 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2924574.pem /etc/ssl/certs/51391683.0
I1217 10:31:45.967591 2962598 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/29245742.pem
I1217 10:31:45.975134 2962598 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/29245742.pem /etc/ssl/certs/29245742.pem
I1217 10:31:45.982998 2962598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29245742.pem
I1217 10:31:45.986896 2962598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 10:31 /usr/share/ca-certificates/29245742.pem
I1217 10:31:45.986957 2962598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29245742.pem
I1217 10:31:46.028227 2962598 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1217 10:31:46.035709 2962598 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/29245742.pem /etc/ssl/certs/3ec20f2e.0
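The test/ln sequence above implements OpenSSL's hashed-cert directory layout: each CA file in /etc/ssl/certs gets a symlink named after its subject hash. The hash names come straight from the openssl runs, e.g. (a sketch, not part of the captured log):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints b5213941, which is why the symlink created above is /etc/ssl/certs/b5213941.0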
I1217 10:31:46.043338 2962598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 10:31:46.046957 2962598 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1217 10:31:46.046999 2962598 kubeadm.go:401] StartCluster: {Name:functional-232588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 10:31:46.047067 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1217 10:31:46.047134 2962598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1217 10:31:46.072899 2962598 cri.go:89] found id: ""
I1217 10:31:46.072960 2962598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 10:31:46.080716 2962598 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 10:31:46.088669 2962598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1217 10:31:46.088725 2962598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 10:31:46.096738 2962598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 10:31:46.096759 2962598 kubeadm.go:158] found existing configuration files:
I1217 10:31:46.096826 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 10:31:46.104765 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 10:31:46.104842 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 10:31:46.112659 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 10:31:46.120614 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 10:31:46.120669 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 10:31:46.128110 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 10:31:46.136107 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 10:31:46.136162 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 10:31:46.143905 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 10:31:46.151763 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 10:31:46.151833 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 10:31:46.159373 2962598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1217 10:31:46.293225 2962598 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1217 10:31:46.293650 2962598 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1217 10:31:46.363441 2962598 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 10:35:50.364017 2962598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1217 10:35:50.364050 2962598 kubeadm.go:319]
I1217 10:35:50.364358 2962598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1217 10:35:50.372588 2962598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1217 10:35:50.372641 2962598 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 10:35:50.372732 2962598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1217 10:35:50.372789 2962598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1217 10:35:50.372824 2962598 kubeadm.go:319] OS: Linux
I1217 10:35:50.372870 2962598 kubeadm.go:319] CGROUPS_CPU: enabled
I1217 10:35:50.372922 2962598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1217 10:35:50.372970 2962598 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1217 10:35:50.373022 2962598 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1217 10:35:50.373069 2962598 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1217 10:35:50.373118 2962598 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1217 10:35:50.373164 2962598 kubeadm.go:319] CGROUPS_PIDS: enabled
I1217 10:35:50.373217 2962598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1217 10:35:50.373260 2962598 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1217 10:35:50.373327 2962598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 10:35:50.373447 2962598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 10:35:50.373557 2962598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 10:35:50.373619 2962598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 10:35:50.376819 2962598 out.go:252] - Generating certificates and keys ...
I1217 10:35:50.376914 2962598 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 10:35:50.376982 2962598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 10:35:50.377075 2962598 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1217 10:35:50.377142 2962598 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1217 10:35:50.377198 2962598 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1217 10:35:50.377258 2962598 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1217 10:35:50.377317 2962598 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1217 10:35:50.377441 2962598 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1217 10:35:50.377494 2962598 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1217 10:35:50.377620 2962598 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1217 10:35:50.377684 2962598 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1217 10:35:50.377752 2962598 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1217 10:35:50.377794 2962598 kubeadm.go:319] [certs] Generating "sa" key and public key
I1217 10:35:50.377869 2962598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 10:35:50.377930 2962598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 10:35:50.377984 2962598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 10:35:50.378036 2962598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 10:35:50.378115 2962598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 10:35:50.378169 2962598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 10:35:50.378248 2962598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 10:35:50.378326 2962598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 10:35:50.381334 2962598 out.go:252] - Booting up control plane ...
I1217 10:35:50.381432 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 10:35:50.381509 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 10:35:50.381575 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 10:35:50.381691 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 10:35:50.381787 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 10:35:50.381914 2962598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 10:35:50.382023 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 10:35:50.382063 2962598 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 10:35:50.382206 2962598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 10:35:50.382313 2962598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 10:35:50.382382 2962598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000105065s
I1217 10:35:50.382385 2962598 kubeadm.go:319]
I1217 10:35:50.382440 2962598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1217 10:35:50.382471 2962598 kubeadm.go:319] - The kubelet is not running
I1217 10:35:50.382590 2962598 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1217 10:35:50.382593 2962598 kubeadm.go:319]
I1217 10:35:50.382696 2962598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1217 10:35:50.382733 2962598 kubeadm.go:319] - 'systemctl status kubelet'
I1217 10:35:50.382761 2962598 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1217 10:35:50.382784 2962598 kubeadm.go:319]
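At this point the root cause is that the kubelet never answered its health endpoint on 127.0.0.1:10248. Before the retry that follows, the usual first-line checks are the two commands kubeadm itself suggests, plus hitting healthz directly from inside the node (a sketch, not part of the captured log; assumes the functional-232588 container is still running):

  minikube -p functional-232588 ssh -- sudo systemctl status kubelet
  minikube -p functional-232588 ssh -- sudo journalctl -xeu kubelet --no-pager
  minikube -p functional-232588 ssh -- curl -sS http://127.0.0.1:10248/healthz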
W1217 10:35:50.382920 2962598 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-232588 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000105065s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
I1217 10:35:50.383008 2962598 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1217 10:35:50.790986 2962598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 10:35:50.804781 2962598 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1217 10:35:50.804833 2962598 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 10:35:50.812969 2962598 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 10:35:50.812978 2962598 kubeadm.go:158] found existing configuration files:
I1217 10:35:50.813029 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 10:35:50.821164 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 10:35:50.821221 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 10:35:50.829116 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 10:35:50.837654 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 10:35:50.837712 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 10:35:50.846052 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 10:35:50.854274 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 10:35:50.854337 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 10:35:50.862425 2962598 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 10:35:50.870630 2962598 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 10:35:50.870689 2962598 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 10:35:50.878320 2962598 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1217 10:35:50.918242 2962598 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1217 10:35:50.918310 2962598 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 10:35:50.994518 2962598 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1217 10:35:50.994583 2962598 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1217 10:35:50.994617 2962598 kubeadm.go:319] OS: Linux
I1217 10:35:50.994661 2962598 kubeadm.go:319] CGROUPS_CPU: enabled
I1217 10:35:50.994719 2962598 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1217 10:35:50.994765 2962598 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1217 10:35:50.994813 2962598 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1217 10:35:50.994894 2962598 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1217 10:35:50.994961 2962598 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1217 10:35:50.995007 2962598 kubeadm.go:319] CGROUPS_PIDS: enabled
I1217 10:35:50.995069 2962598 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1217 10:35:50.995122 2962598 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1217 10:35:51.065555 2962598 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 10:35:51.065659 2962598 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 10:35:51.065748 2962598 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 10:35:51.072939 2962598 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 10:35:51.078279 2962598 out.go:252] - Generating certificates and keys ...
I1217 10:35:51.078360 2962598 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 10:35:51.078423 2962598 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 10:35:51.078508 2962598 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1217 10:35:51.078569 2962598 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1217 10:35:51.078638 2962598 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1217 10:35:51.078691 2962598 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1217 10:35:51.078758 2962598 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1217 10:35:51.078818 2962598 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1217 10:35:51.078892 2962598 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1217 10:35:51.078969 2962598 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1217 10:35:51.079005 2962598 kubeadm.go:319] [certs] Using the existing "sa" key
I1217 10:35:51.079059 2962598 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 10:35:51.417731 2962598 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 10:35:51.963655 2962598 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 10:35:52.437400 2962598 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 10:35:52.623651 2962598 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 10:35:52.761132 2962598 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 10:35:52.761895 2962598 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 10:35:52.765413 2962598 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 10:35:52.768580 2962598 out.go:252] - Booting up control plane ...
I1217 10:35:52.768684 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 10:35:52.768762 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 10:35:52.770868 2962598 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 10:35:52.791571 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 10:35:52.791672 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 10:35:52.799060 2962598 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 10:35:52.799388 2962598 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 10:35:52.799565 2962598 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 10:35:52.940908 2962598 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 10:35:52.941043 2962598 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 10:39:52.935423 2962598 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001251054s
I1217 10:39:52.935448 2962598 kubeadm.go:319]
I1217 10:39:52.935520 2962598 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1217 10:39:52.935575 2962598 kubeadm.go:319] - The kubelet is not running
I1217 10:39:52.935686 2962598 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1217 10:39:52.935691 2962598 kubeadm.go:319]
I1217 10:39:52.935794 2962598 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1217 10:39:52.935833 2962598 kubeadm.go:319] - 'systemctl status kubelet'
I1217 10:39:52.935864 2962598 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1217 10:39:52.935867 2962598 kubeadm.go:319]
I1217 10:39:52.939982 2962598 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1217 10:39:52.940396 2962598 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1217 10:39:52.940572 2962598 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 10:39:52.940861 2962598 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1217 10:39:52.940868 2962598 kubeadm.go:319]
I1217 10:39:52.940948 2962598 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1217 10:39:52.941009 2962598 kubeadm.go:403] duration metric: took 8m6.894014029s to StartCluster
I1217 10:39:52.941055 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1217 10:39:52.941117 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1217 10:39:52.966691 2962598 cri.go:89] found id: ""
I1217 10:39:52.966706 2962598 logs.go:282] 0 containers: []
W1217 10:39:52.966713 2962598 logs.go:284] No container was found matching "kube-apiserver"
I1217 10:39:52.966720 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1217 10:39:52.966782 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1217 10:39:52.991119 2962598 cri.go:89] found id: ""
I1217 10:39:52.991133 2962598 logs.go:282] 0 containers: []
W1217 10:39:52.991140 2962598 logs.go:284] No container was found matching "etcd"
I1217 10:39:52.991145 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1217 10:39:52.991203 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1217 10:39:53.022458 2962598 cri.go:89] found id: ""
I1217 10:39:53.022473 2962598 logs.go:282] 0 containers: []
W1217 10:39:53.022480 2962598 logs.go:284] No container was found matching "coredns"
I1217 10:39:53.022486 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1217 10:39:53.022549 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1217 10:39:53.056616 2962598 cri.go:89] found id: ""
I1217 10:39:53.056631 2962598 logs.go:282] 0 containers: []
W1217 10:39:53.056639 2962598 logs.go:284] No container was found matching "kube-scheduler"
I1217 10:39:53.056644 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1217 10:39:53.056716 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1217 10:39:53.097546 2962598 cri.go:89] found id: ""
I1217 10:39:53.097560 2962598 logs.go:282] 0 containers: []
W1217 10:39:53.097568 2962598 logs.go:284] No container was found matching "kube-proxy"
I1217 10:39:53.097573 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1217 10:39:53.097633 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1217 10:39:53.123414 2962598 cri.go:89] found id: ""
I1217 10:39:53.123436 2962598 logs.go:282] 0 containers: []
W1217 10:39:53.123444 2962598 logs.go:284] No container was found matching "kube-controller-manager"
I1217 10:39:53.123450 2962598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1217 10:39:53.123515 2962598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1217 10:39:53.148817 2962598 cri.go:89] found id: ""
I1217 10:39:53.148832 2962598 logs.go:282] 0 containers: []
W1217 10:39:53.148840 2962598 logs.go:284] No container was found matching "kindnet"
I1217 10:39:53.148855 2962598 logs.go:123] Gathering logs for kubelet ...
I1217 10:39:53.148865 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1217 10:39:53.206472 2962598 logs.go:123] Gathering logs for dmesg ...
I1217 10:39:53.206491 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1217 10:39:53.223677 2962598 logs.go:123] Gathering logs for describe nodes ...
I1217 10:39:53.223697 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1217 10:39:53.291954 2962598 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1217 10:39:53.282996 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.283561 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.285255 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.285948 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.287682 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1217 10:39:53.282996 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.283561 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.285255 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.285948 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:53.287682 4783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1217 10:39:53.291964 2962598 logs.go:123] Gathering logs for containerd ...
I1217 10:39:53.291975 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1217 10:39:53.330138 2962598 logs.go:123] Gathering logs for container status ...
I1217 10:39:53.330156 2962598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1217 10:39:53.360079 2962598 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001251054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 10:39:53.360136 2962598 out.go:285] *
W1217 10:39:53.360194 2962598 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001251054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 10:39:53.360205 2962598 out.go:285] *
W1217 10:39:53.362460 2962598 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1217 10:39:53.368847 2962598 out.go:203]
W1217 10:39:53.371684 2962598 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001251054s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 10:39:53.371720 2962598 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1217 10:39:53.371746 2962598 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1217 10:39:53.374892 2962598 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271707202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271718304Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271762020Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271776509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271789547Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271802814Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271811643Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271837103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271855728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.271884847Z" level=info msg="Connect containerd service"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.273794703Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.274382316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291778081Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291856085Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291882571Z" level=info msg="Start subscribing containerd event"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.291930890Z" level=info msg="Start recovering state"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334179068Z" level=info msg="Start event monitor"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334377053Z" level=info msg="Start cni network conf syncer for default"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334443136Z" level=info msg="Start streaming server"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334507979Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334565315Z" level=info msg="runtime interface starting up..."
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334616768Z" level=info msg="starting plugins..."
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.334676648Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 17 10:31:44 functional-232588 systemd[1]: Started containerd.service - containerd container runtime.
Dec 17 10:31:44 functional-232588 containerd[762]: time="2025-12-17T10:31:44.336482711Z" level=info msg="containerd successfully booted in 0.084940s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1217 10:39:54.352937 4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:54.353456 4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:54.355207 4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:54.355585 4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 10:39:54.357076 4899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[ +26.532481] overlayfs: idmapped layers are currently not supported
[Dec17 09:26] overlayfs: idmapped layers are currently not supported
[Dec17 09:27] overlayfs: idmapped layers are currently not supported
[Dec17 09:29] overlayfs: idmapped layers are currently not supported
[Dec17 09:31] overlayfs: idmapped layers are currently not supported
[Dec17 09:41] overlayfs: idmapped layers are currently not supported
[Dec17 09:43] overlayfs: idmapped layers are currently not supported
[Dec17 09:44] overlayfs: idmapped layers are currently not supported
[ +5.066669] overlayfs: idmapped layers are currently not supported
[ +38.827173] overlayfs: idmapped layers are currently not supported
[Dec17 09:45] overlayfs: idmapped layers are currently not supported
[Dec17 09:46] overlayfs: idmapped layers are currently not supported
[Dec17 09:48] overlayfs: idmapped layers are currently not supported
[ +5.468161] overlayfs: idmapped layers are currently not supported
[Dec17 09:49] overlayfs: idmapped layers are currently not supported
[ +4.263444] overlayfs: idmapped layers are currently not supported
[Dec17 09:50] overlayfs: idmapped layers are currently not supported
[Dec17 10:07] overlayfs: idmapped layers are currently not supported
[Dec17 10:08] overlayfs: idmapped layers are currently not supported
[Dec17 10:10] overlayfs: idmapped layers are currently not supported
[Dec17 10:11] overlayfs: idmapped layers are currently not supported
[Dec17 10:13] overlayfs: idmapped layers are currently not supported
[Dec17 10:15] overlayfs: idmapped layers are currently not supported
[Dec17 10:16] overlayfs: idmapped layers are currently not supported
[Dec17 10:21] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
10:39:54 up 16:22, 0 user, load average: 0.10, 0.46, 1.07
Linux functional-232588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 17 10:39:50 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 10:39:51 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 17 10:39:51 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:51 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:51 functional-232588 kubelet[4704]: E1217 10:39:51.544857 4704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 10:39:51 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 10:39:51 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 17 10:39:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:52 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:52 functional-232588 kubelet[4710]: E1217 10:39:52.299503 4710 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 10:39:52 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 17 10:39:52 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:53 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:53 functional-232588 kubelet[4738]: E1217 10:39:53.083365 4738 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 17 10:39:53 functional-232588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:53 functional-232588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 10:39:53 functional-232588 kubelet[4816]: E1217 10:39:53.828903 4816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 10:39:53 functional-232588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232588 -n functional-232588: exit status 6 (353.915593ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1217 10:39:54.829278 2968295 status.go:458] kubeconfig endpoint: get endpoint: "functional-232588" does not appear in /home/jenkins/minikube-integration/22182-2922712/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-232588" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.29s)
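
Root cause, as recorded in the kubelet journal above: kubelet v1.35.0-rc.1 exits on startup because the host runs cgroup v1 and configuration validation rejects it ("kubelet is configured to not run on a host using cgroup v1"), so systemd crash-loops the service (restart counter 318-321) until kubeadm's 4m0s health check against http://127.0.0.1:10248/healthz times out. A minimal sketch of the two mitigations the output itself names, assuming a cgroup v1 host; only the quoted flag, field name, and URL are taken from the log, the exact YAML spelling and invocation details are illustrative and unverified against this build:

# Suggestion printed in the failure output: pass a kubelet cgroup-driver
# override through minikube.
minikube start -p functional-232588 --driver=docker \
  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
  --extra-config=kubelet.cgroup-driver=systemd

# Per the kubeadm SystemVerification warning, kubelet v1.35+ also refuses
# cgroup v1 hosts unless FailCgroupV1 is explicitly set to false in its
# KubeletConfiguration, presumably spelled in YAML as:
#
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   failCgroupV1: false

The longer-term fix the warning points to (https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1) is migrating the CI hosts to cgroup v2.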