=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1217 00:37:53.305870 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:40:09.433213 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:40:37.147162 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.876783 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.883278 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.894903 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.916642 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:56.958151 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:57.039664 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:57.201376 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:57.523172 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:58.165332 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:41:59.447347 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:02.010278 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:07.132199 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:17.374449 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:42:37.855963 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:18.818734 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:44:40.740201 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-416001/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:09.433353 1211243 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/addons-799486/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
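Note on the cert_rotation errors above: they reference client certs for profiles (addons-799486, functional-416001) that appear to have been deleted earlier in this run (see the delete entry in the audit table further down), so they are noise from stale kubeconfig references rather than part of this failure. A minimal sketch for spotting such stale references, assuming the kubeconfig path shown in this log:

    grep -n 'client.crt\|client.key' /home/jenkins/minikube-integration/22168-1208015/kubeconfig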
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.379086834s)
-- stdout --
* [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22168
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-608344" primary control-plane node in "functional-608344" cluster
* Pulling base image v0.0.48-1765661130-22141 ...
* Found network options:
- HTTP_PROXY=localhost:46313
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:46313 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000056154s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001108504s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001108504s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
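Note for anyone reproducing this outside CI: the kubeadm output shows the kubelet never became healthy on 127.0.0.1:10248, and the SystemVerification warning about cgroups v1 (which only fires on a v1 host) plus the host's CgroupDriver:cgroupfs in the docker info further down make the cgroup configuration the most likely culprit for kubelet v1.35. A minimal triage sketch, using only the container name, commands, and flag already present in this log:

    # look at the kubelet inside the kic container (name taken from this log)
    docker exec functional-608344 systemctl status kubelet --no-pager
    docker exec functional-608344 journalctl -xeu kubelet --no-pager | tail -n 50
    # retry with the cgroup driver that minikube's own Suggestion line proposes
    out/minikube-linux-arm64 start -p functional-608344 --extra-config=kubelet.cgroup-driver=systemd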
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-608344
helpers_test.go:244: (dbg) docker inspect functional-608344:
-- stdout --
[
{
"Id": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
"Created": "2025-12-17T00:37:51.919492207Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1250014,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-17T00:37:51.980484436Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
"ResolvConfPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hostname",
"HostsPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/hosts",
"LogPath": "/var/lib/docker/containers/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc/c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc-json.log",
"Name": "/functional-608344",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-608344:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-608344",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "c4b80a2791ee7fd3320fcd2d2228a985d6ec5d2a72773482c209f42184c9e7fc",
"LowerDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55-init/diff:/var/lib/docker/overlay2/8ecc34c2afe406b378e4fda03788c29f2fd1fefd272b6b141256c6ec1cfd7a56/diff",
"MergedDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/merged",
"UpperDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/diff",
"WorkDir": "/var/lib/docker/overlay2/16c7ae34c7a152519390fed8935758e54f52823689571face1b60f208fccda55/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-608344",
"Source": "/var/lib/docker/volumes/functional-608344/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-608344",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-608344",
"name.minikube.sigs.k8s.io": "functional-608344",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1788902206da3fb958350909e1e2dcd0f09e17b9f21816d43ec2e8077d073078",
"SandboxKey": "/var/run/docker/netns/1788902206da",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33943"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33944"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33947"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33945"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33946"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-608344": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "3a:51:82:0a:0a:95",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "6a1621db788c73a201a78c04c7db848af643af873e51e0d78cabb70e10c349b3",
"EndpointID": "f9099c9f53542a37c0be6d7a2dbeeb4f696c255add5f19fa301181637b785d96",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-608344",
"c4b80a2791ee"
]
}
}
}
}
]
-- /stdout --
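The inspect output above is what a healthy kic node looks like at the docker level; the parts worth checking when debugging are the published ports. A small sketch for extracting the host-side mapping of the apiserver port, assuming jq is available (the values match the Ports block above):

    docker inspect functional-608344 \
      | jq -r '.[0].NetworkSettings.Ports["8441/tcp"][0].HostPort'
    # prints 33946 for the run captured in this log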
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-608344 -n functional-608344: exit status 6 (291.254783ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1217 00:46:08.713211 1255116 status.go:458] kubeconfig endpoint: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
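The exit status 6 here is consistent with the failed start: the endpoint for functional-608344 was never written to the kubeconfig, so status cannot resolve it. A sketch of the fix the status output itself suggests, plus a quick sanity check, using the binary and paths from this log (update-context may still fail here, since the cluster never finished starting):

    out/minikube-linux-arm64 -p functional-608344 update-context
    grep -n 'functional-608344' /home/jenkins/minikube-integration/22168-1208015/kubeconfig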
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-608344 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-416001 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image ls │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh sudo cat /etc/ssl/certs/12112432.pem │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image save --daemon kicbase/echo-server:functional-416001 --alsologtostderr │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh sudo cat /usr/share/ca-certificates/12112432.pem │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh sudo cat /etc/test/nested/copy/1211243/hosts │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ cp │ functional-416001 cp testdata/cp-test.txt /home/docker/cp-test.txt │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ cp │ functional-416001 cp functional-416001:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1170430960/001/cp-test.txt │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image ls --format short --alsologtostderr │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh -n functional-416001 sudo cat /home/docker/cp-test.txt │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image ls --format yaml --alsologtostderr │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ cp │ functional-416001 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ ssh │ functional-416001 ssh pgrep buildkitd │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ │
│ ssh │ functional-416001 ssh -n functional-416001 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image build -t localhost/my-image:functional-416001 testdata/build --alsologtostderr │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image ls --format json --alsologtostderr │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image ls --format table --alsologtostderr │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ update-context │ functional-416001 update-context --alsologtostderr -v=2 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ update-context │ functional-416001 update-context --alsologtostderr -v=2 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ update-context │ functional-416001 update-context --alsologtostderr -v=2 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ image │ functional-416001 image ls │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ delete │ -p functional-416001 │ functional-416001 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ 17 Dec 25 00:37 UTC │
│ start │ -p functional-608344 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-608344 │ jenkins │ v1.37.0 │ 17 Dec 25 00:37 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 00:37:47
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 00:37:47.077849 1249620 out.go:360] Setting OutFile to fd 1 ...
I1217 00:37:47.077955 1249620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:47.077958 1249620 out.go:374] Setting ErrFile to fd 2...
I1217 00:37:47.077962 1249620 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:37:47.078209 1249620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-1208015/.minikube/bin
I1217 00:37:47.078611 1249620 out.go:368] Setting JSON to false
I1217 00:37:47.079403 1249620 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":22817,"bootTime":1765909050,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1217 00:37:47.079462 1249620 start.go:143] virtualization:
I1217 00:37:47.083874 1249620 out.go:179] * [functional-608344] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1217 00:37:47.088923 1249620 out.go:179] - MINIKUBE_LOCATION=22168
I1217 00:37:47.089023 1249620 notify.go:221] Checking for updates...
I1217 00:37:47.093049 1249620 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 00:37:47.096528 1249620 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22168-1208015/kubeconfig
I1217 00:37:47.100194 1249620 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-1208015/.minikube
I1217 00:37:47.103536 1249620 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1217 00:37:47.106797 1249620 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 00:37:47.110131 1249620 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 00:37:47.131149 1249620 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1217 00:37:47.131268 1249620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 00:37:47.195041 1249620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 00:37:47.185856639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 00:37:47.195134 1249620 docker.go:319] overlay module found
I1217 00:37:47.200386 1249620 out.go:179] * Using the docker driver based on user configuration
I1217 00:37:47.203383 1249620 start.go:309] selected driver: docker
I1217 00:37:47.203393 1249620 start.go:927] validating driver "docker" against <nil>
I1217 00:37:47.203405 1249620 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 00:37:47.204106 1249620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 00:37:47.262382 1249620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 00:37:47.253485946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 00:37:47.262538 1249620 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1217 00:37:47.262754 1249620 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 00:37:47.265880 1249620 out.go:179] * Using Docker driver with root privileges
I1217 00:37:47.268790 1249620 cni.go:84] Creating CNI manager for ""
I1217 00:37:47.268853 1249620 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1217 00:37:47.268861 1249620 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1217 00:37:47.268932 1249620 start.go:353] cluster config:
{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 00:37:47.272212 1249620 out.go:179] * Starting "functional-608344" primary control-plane node in "functional-608344" cluster
I1217 00:37:47.275041 1249620 cache.go:134] Beginning downloading kic base image for docker with containerd
I1217 00:37:47.277990 1249620 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
I1217 00:37:47.280873 1249620 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1217 00:37:47.280909 1249620 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1217 00:37:47.280917 1249620 cache.go:65] Caching tarball of preloaded images
I1217 00:37:47.280930 1249620 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
I1217 00:37:47.281017 1249620 preload.go:238] Found /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1217 00:37:47.281026 1249620 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1217 00:37:47.281391 1249620 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
I1217 00:37:47.281410 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json: {Name:mk1f8807fb33e420cc0d4f5da5e8ec1f77d72d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:47.300587 1249620 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
I1217 00:37:47.300596 1249620 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
I1217 00:37:47.300616 1249620 cache.go:243] Successfully downloaded all kic artifacts
I1217 00:37:47.300638 1249620 start.go:360] acquireMachinesLock for functional-608344: {Name:mk1c6a700a4b5e943531d30119e686d435702165 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 00:37:47.300752 1249620 start.go:364] duration metric: took 100.801µs to acquireMachinesLock for "functional-608344"
I1217 00:37:47.300777 1249620 start.go:93] Provisioning new machine with config: &{Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1217 00:37:47.300842 1249620 start.go:125] createHost starting for "" (driver="docker")
I1217 00:37:47.306068 1249620 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1217 00:37:47.306362 1249620 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46313 to docker env.
I1217 00:37:47.306386 1249620 start.go:159] libmachine.API.Create for "functional-608344" (driver="docker")
I1217 00:37:47.306409 1249620 client.go:173] LocalClient.Create starting
I1217 00:37:47.306477 1249620 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem
I1217 00:37:47.306508 1249620 main.go:143] libmachine: Decoding PEM data...
I1217 00:37:47.306525 1249620 main.go:143] libmachine: Parsing certificate...
I1217 00:37:47.306573 1249620 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem
I1217 00:37:47.306592 1249620 main.go:143] libmachine: Decoding PEM data...
I1217 00:37:47.306603 1249620 main.go:143] libmachine: Parsing certificate...
I1217 00:37:47.306962 1249620 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 00:37:47.322361 1249620 cli_runner.go:211] docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 00:37:47.322443 1249620 network_create.go:284] running [docker network inspect functional-608344] to gather additional debugging logs...
I1217 00:37:47.322457 1249620 cli_runner.go:164] Run: docker network inspect functional-608344
W1217 00:37:47.337510 1249620 cli_runner.go:211] docker network inspect functional-608344 returned with exit code 1
I1217 00:37:47.337528 1249620 network_create.go:287] error running [docker network inspect functional-608344]: docker network inspect functional-608344: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-608344 not found
I1217 00:37:47.337539 1249620 network_create.go:289] output of [docker network inspect functional-608344]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-608344 not found
** /stderr **
I1217 00:37:47.337634 1249620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 00:37:47.354391 1249620 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f3680}
I1217 00:37:47.354423 1249620 network_create.go:124] attempt to create docker network functional-608344 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1217 00:37:47.354483 1249620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-608344 functional-608344
I1217 00:37:47.411696 1249620 network_create.go:108] docker network functional-608344 192.168.49.0/24 created
I1217 00:37:47.411718 1249620 kic.go:121] calculated static IP "192.168.49.2" for the "functional-608344" container
I1217 00:37:47.411806 1249620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1217 00:37:47.427711 1249620 cli_runner.go:164] Run: docker volume create functional-608344 --label name.minikube.sigs.k8s.io=functional-608344 --label created_by.minikube.sigs.k8s.io=true
I1217 00:37:47.445155 1249620 oci.go:103] Successfully created a docker volume functional-608344
I1217 00:37:47.445233 1249620 cli_runner.go:164] Run: docker run --rm --name functional-608344-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-608344 --entrypoint /usr/bin/test -v functional-608344:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
I1217 00:37:47.948682 1249620 oci.go:107] Successfully prepared a docker volume functional-608344
I1217 00:37:47.948746 1249620 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1217 00:37:47.948755 1249620 kic.go:194] Starting extracting preloaded images to volume ...
I1217 00:37:47.948815 1249620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-608344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
I1217 00:37:51.849405 1249620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22168-1208015/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-608344:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (3.900544075s)
I1217 00:37:51.849425 1249620 kic.go:203] duration metric: took 3.90066859s to extract preloaded images to volume ...
W1217 00:37:51.849565 1249620 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1217 00:37:51.849687 1249620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1217 00:37:51.903066 1249620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-608344 --name functional-608344 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-608344 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-608344 --network functional-608344 --ip 192.168.49.2 --volume functional-608344:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
I1217 00:37:52.213466 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Running}}
I1217 00:37:52.235175 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 00:37:52.258972 1249620 cli_runner.go:164] Run: docker exec functional-608344 stat /var/lib/dpkg/alternatives/iptables
I1217 00:37:52.308518 1249620 oci.go:144] the created container "functional-608344" has a running status.
I1217 00:37:52.308551 1249620 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa...
I1217 00:37:53.208026 1249620 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1217 00:37:53.226885 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 00:37:53.243635 1249620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1217 00:37:53.243647 1249620 kic_runner.go:114] Args: [docker exec --privileged functional-608344 chown docker:docker /home/docker/.ssh/authorized_keys]
I1217 00:37:53.281954 1249620 cli_runner.go:164] Run: docker container inspect functional-608344 --format={{.State.Status}}
I1217 00:37:53.300084 1249620 machine.go:94] provisionDockerMachine start ...
I1217 00:37:53.300164 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:53.316642 1249620 main.go:143] libmachine: Using SSH client type: native
I1217 00:37:53.316974 1249620 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33943 <nil> <nil>}
I1217 00:37:53.316981 1249620 main.go:143] libmachine: About to run SSH command:
hostname
I1217 00:37:53.317637 1249620 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47132->127.0.0.1:33943: read: connection reset by peer
I1217 00:37:56.449193 1249620 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
I1217 00:37:56.449207 1249620 ubuntu.go:182] provisioning hostname "functional-608344"
I1217 00:37:56.449269 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:56.467319 1249620 main.go:143] libmachine: Using SSH client type: native
I1217 00:37:56.467623 1249620 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33943 <nil> <nil>}
I1217 00:37:56.467631 1249620 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-608344 && echo "functional-608344" | sudo tee /etc/hostname
I1217 00:37:56.606336 1249620 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-608344
I1217 00:37:56.606405 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:56.623349 1249620 main.go:143] libmachine: Using SSH client type: native
I1217 00:37:56.623637 1249620 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33943 <nil> <nil>}
I1217 00:37:56.623652 1249620 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-608344' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-608344/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-608344' | sudo tee -a /etc/hosts;
  fi
fi
I1217 00:37:56.753874 1249620 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 00:37:56.753890 1249620 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22168-1208015/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-1208015/.minikube}
I1217 00:37:56.753919 1249620 ubuntu.go:190] setting up certificates
I1217 00:37:56.753931 1249620 provision.go:84] configureAuth start
I1217 00:37:56.753998 1249620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
I1217 00:37:56.770211 1249620 provision.go:143] copyHostCerts
I1217 00:37:56.770264 1249620 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem, removing ...
I1217 00:37:56.770273 1249620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem
I1217 00:37:56.770346 1249620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.pem (1082 bytes)
I1217 00:37:56.770434 1249620 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem, removing ...
I1217 00:37:56.770437 1249620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem
I1217 00:37:56.770461 1249620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/cert.pem (1123 bytes)
I1217 00:37:56.770508 1249620 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem, removing ...
I1217 00:37:56.770511 1249620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem
I1217 00:37:56.770532 1249620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-1208015/.minikube/key.pem (1679 bytes)
I1217 00:37:56.770572 1249620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem org=jenkins.functional-608344 san=[127.0.0.1 192.168.49.2 functional-608344 localhost minikube]
I1217 00:37:56.858168 1249620 provision.go:177] copyRemoteCerts
I1217 00:37:56.858219 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 00:37:56.858255 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:56.875117 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 00:37:56.969064 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1217 00:37:56.985613 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1217 00:37:57.003111 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1217 00:37:57.022509 1249620 provision.go:87] duration metric: took 268.565373ms to configureAuth
I1217 00:37:57.022527 1249620 ubuntu.go:206] setting minikube options for container-runtime
I1217 00:37:57.022715 1249620 config.go:182] Loaded profile config "functional-608344": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1217 00:37:57.022722 1249620 machine.go:97] duration metric: took 3.722627932s to provisionDockerMachine
I1217 00:37:57.022728 1249620 client.go:176] duration metric: took 9.716314707s to LocalClient.Create
I1217 00:37:57.022750 1249620 start.go:167] duration metric: took 9.716365128s to libmachine.API.Create "functional-608344"
I1217 00:37:57.022764 1249620 start.go:293] postStartSetup for "functional-608344" (driver="docker")
I1217 00:37:57.022773 1249620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 00:37:57.022826 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 00:37:57.022864 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:57.040877 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 00:37:57.137766 1249620 ssh_runner.go:195] Run: cat /etc/os-release
I1217 00:37:57.140898 1249620 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1217 00:37:57.140915 1249620 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1217 00:37:57.140925 1249620 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/addons for local assets ...
I1217 00:37:57.140979 1249620 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-1208015/.minikube/files for local assets ...
I1217 00:37:57.141064 1249620 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem -> 12112432.pem in /etc/ssl/certs
I1217 00:37:57.141148 1249620 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts -> hosts in /etc/test/nested/copy/1211243
I1217 00:37:57.141190 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1211243
I1217 00:37:57.148704 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /etc/ssl/certs/12112432.pem (1708 bytes)
I1217 00:37:57.166326 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/test/nested/copy/1211243/hosts --> /etc/test/nested/copy/1211243/hosts (40 bytes)
I1217 00:37:57.183180 1249620 start.go:296] duration metric: took 160.401967ms for postStartSetup
I1217 00:37:57.183557 1249620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
I1217 00:37:57.201225 1249620 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/config.json ...
I1217 00:37:57.201852 1249620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1217 00:37:57.201900 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:57.220028 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 00:37:57.310294 1249620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1217 00:37:57.314936 1249620 start.go:128] duration metric: took 10.014079181s to createHost
I1217 00:37:57.314951 1249620 start.go:83] releasing machines lock for "functional-608344", held for 10.014191174s
I1217 00:37:57.315028 1249620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-608344
I1217 00:37:57.336100 1249620 out.go:179] * Found network options:
I1217 00:37:57.339083 1249620 out.go:179] - HTTP_PROXY=localhost:46313
W1217 00:37:57.342000 1249620 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1217 00:37:57.344855 1249620 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1217 00:37:57.347764 1249620 ssh_runner.go:195] Run: cat /version.json
I1217 00:37:57.347805 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:57.347818 1249620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 00:37:57.347874 1249620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-608344
I1217 00:37:57.367319 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 00:37:57.368145 1249620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/22168-1208015/.minikube/machines/functional-608344/id_rsa Username:docker}
I1217 00:37:57.457179 1249620 ssh_runner.go:195] Run: systemctl --version
I1217 00:37:57.546450 1249620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 00:37:57.551120 1249620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 00:37:57.551185 1249620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 00:37:57.578033 1249620 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1217 00:37:57.578047 1249620 start.go:496] detecting cgroup driver to use...
I1217 00:37:57.578079 1249620 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1217 00:37:57.578126 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1217 00:37:57.594624 1249620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1217 00:37:57.607799 1249620 docker.go:218] disabling cri-docker service (if available) ...
I1217 00:37:57.607861 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 00:37:57.626516 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 00:37:57.646515 1249620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 00:37:57.762826 1249620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 00:37:57.880691 1249620 docker.go:234] disabling docker service ...
I1217 00:37:57.880766 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 00:37:57.902308 1249620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 00:37:57.916156 1249620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 00:37:58.047046 1249620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 00:37:58.174365 1249620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 00:37:58.187558 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1217 00:37:58.202152 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1217 00:37:58.210580 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1217 00:37:58.219138 1249620 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1217 00:37:58.219194 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1217 00:37:58.227736 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 00:37:58.236218 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1217 00:37:58.245396 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 00:37:58.253913 1249620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 00:37:58.261959 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1217 00:37:58.270266 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1217 00:37:58.279078 1249620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1217 00:37:58.287787 1249620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 00:37:58.295133 1249620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 00:37:58.302379 1249620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 00:37:58.443740 1249620 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1217 00:37:58.584688 1249620 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1217 00:37:58.584746 1249620 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1217 00:37:58.588390 1249620 start.go:564] Will wait 60s for crictl version
I1217 00:37:58.588441 1249620 ssh_runner.go:195] Run: which crictl
I1217 00:37:58.591767 1249620 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1217 00:37:58.618628 1249620 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1217 00:37:58.618687 1249620 ssh_runner.go:195] Run: containerd --version
I1217 00:37:58.639379 1249620 ssh_runner.go:195] Run: containerd --version
I1217 00:37:58.662959 1249620 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1217 00:37:58.665926 1249620 cli_runner.go:164] Run: docker network inspect functional-608344 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 00:37:58.682046 1249620 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1217 00:37:58.685871 1249620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 00:37:58.695416 1249620 kubeadm.go:884] updating cluster {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 00:37:58.695549 1249620 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1217 00:37:58.695626 1249620 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 00:37:58.726170 1249620 containerd.go:627] all images are preloaded for containerd runtime.
I1217 00:37:58.726182 1249620 containerd.go:534] Images already preloaded, skipping extraction
I1217 00:37:58.726241 1249620 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 00:37:58.752080 1249620 containerd.go:627] all images are preloaded for containerd runtime.
I1217 00:37:58.752092 1249620 cache_images.go:86] Images are preloaded, skipping loading
I1217 00:37:58.752098 1249620 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1217 00:37:58.752195 1249620 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-608344 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 00:37:58.752260 1249620 ssh_runner.go:195] Run: sudo crictl info
I1217 00:37:58.782017 1249620 cni.go:84] Creating CNI manager for ""
I1217 00:37:58.782028 1249620 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1217 00:37:58.782047 1249620 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 00:37:58.782068 1249620 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-608344 NodeName:functional-608344 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 00:37:58.782183 1249620 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-608344"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1217 00:37:58.782249 1249620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1217 00:37:58.790084 1249620 binaries.go:51] Found k8s binaries, skipping transfer
I1217 00:37:58.790144 1249620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 00:37:58.798586 1249620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1217 00:37:58.811228 1249620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1217 00:37:58.824689 1249620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1217 00:37:58.839221 1249620 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1217 00:37:58.842780 1249620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 00:37:58.852390 1249620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 00:37:58.975284 1249620 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 00:37:58.992908 1249620 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344 for IP: 192.168.49.2
I1217 00:37:58.992919 1249620 certs.go:195] generating shared ca certs ...
I1217 00:37:58.992933 1249620 certs.go:227] acquiring lock for ca certs: {Name:mk048272a80e93c676a3d23a466ea54e7270e11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:58.993079 1249620 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key
I1217 00:37:58.993124 1249620 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key
I1217 00:37:58.993131 1249620 certs.go:257] generating profile certs ...
I1217 00:37:58.993188 1249620 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key
I1217 00:37:58.993197 1249620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt with IP's: []
I1217 00:37:59.036718 1249620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt ...
I1217 00:37:59.036734 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.crt: {Name:mke1055b1743c3fc8eb6e33c072f1d335124c556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:59.036963 1249620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key ...
I1217 00:37:59.036970 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/client.key: {Name:mk461e2e0eab3edf14ca28ae6602a298e3e17f65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:59.037073 1249620 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443
I1217 00:37:59.037084 1249620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1217 00:37:59.167465 1249620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443 ...
I1217 00:37:59.167483 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443: {Name:mk8dc0d6c0daab9347068427e8209973d836c8c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:59.167673 1249620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443 ...
I1217 00:37:59.167681 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443: {Name:mkc3d620dd9591dfedaabfc3021cd8140ed4a374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:59.167762 1249620 certs.go:382] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt.29ae8443 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt
I1217 00:37:59.167834 1249620 certs.go:386] copying /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key.29ae8443 -> /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key
I1217 00:37:59.167883 1249620 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key
I1217 00:37:59.167896 1249620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt with IP's: []
I1217 00:37:59.557517 1249620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt ...
I1217 00:37:59.557533 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt: {Name:mkc09dd685c225df59597749144f64a4b663565b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:59.557741 1249620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key ...
I1217 00:37:59.557749 1249620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key: {Name:mk1a97bc531dde4d6b10d1362faa778722096d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:37:59.557950 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem (1338 bytes)
W1217 00:37:59.557991 1249620 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243_empty.pem, impossibly tiny 0 bytes
I1217 00:37:59.558001 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca-key.pem (1675 bytes)
I1217 00:37:59.558027 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/ca.pem (1082 bytes)
I1217 00:37:59.558051 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/cert.pem (1123 bytes)
I1217 00:37:59.558073 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/key.pem (1679 bytes)
I1217 00:37:59.558115 1249620 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem (1708 bytes)
I1217 00:37:59.558718 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 00:37:59.576791 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1217 00:37:59.597061 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 00:37:59.615468 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1217 00:37:59.633601 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1217 00:37:59.651138 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1217 00:37:59.669128 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 00:37:59.686690 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/profiles/functional-608344/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1217 00:37:59.704426 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/files/etc/ssl/certs/12112432.pem --> /usr/share/ca-certificates/12112432.pem (1708 bytes)
I1217 00:37:59.723065 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 00:37:59.741082 1249620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-1208015/.minikube/certs/1211243.pem --> /usr/share/ca-certificates/1211243.pem (1338 bytes)
I1217 00:37:59.759073 1249620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 00:37:59.771607 1249620 ssh_runner.go:195] Run: openssl version
I1217 00:37:59.777781 1249620 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12112432.pem
I1217 00:37:59.785025 1249620 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12112432.pem /etc/ssl/certs/12112432.pem
I1217 00:37:59.793092 1249620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12112432.pem
I1217 00:37:59.796760 1249620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:37 /usr/share/ca-certificates/12112432.pem
I1217 00:37:59.796815 1249620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12112432.pem
I1217 00:37:59.837817 1249620 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1217 00:37:59.845315 1249620 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12112432.pem /etc/ssl/certs/3ec20f2e.0
I1217 00:37:59.852901 1249620 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 00:37:59.860449 1249620 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 00:37:59.869748 1249620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 00:37:59.874373 1249620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:27 /usr/share/ca-certificates/minikubeCA.pem
I1217 00:37:59.874439 1249620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 00:37:59.915911 1249620 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 00:37:59.926380 1249620 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1217 00:37:59.934031 1249620 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1211243.pem
I1217 00:37:59.941218 1249620 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1211243.pem /etc/ssl/certs/1211243.pem
I1217 00:37:59.948698 1249620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1211243.pem
I1217 00:37:59.952356 1249620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:37 /usr/share/ca-certificates/1211243.pem
I1217 00:37:59.952416 1249620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1211243.pem
I1217 00:37:59.993749 1249620 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1217 00:38:00.006389 1249620 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1211243.pem /etc/ssl/certs/51391683.0
I1217 00:38:00.056235 1249620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 00:38:00.068542 1249620 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1217 00:38:00.068590 1249620 kubeadm.go:401] StartCluster: {Name:functional-608344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-608344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 00:38:00.068662 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1217 00:38:00.068728 1249620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1217 00:38:00.239824 1249620 cri.go:89] found id: ""
I1217 00:38:00.239902 1249620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 00:38:00.262961 1249620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 00:38:00.281451 1249620 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1217 00:38:00.281523 1249620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 00:38:00.300340 1249620 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 00:38:00.300352 1249620 kubeadm.go:158] found existing configuration files:
I1217 00:38:00.300408 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 00:38:00.323462 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 00:38:00.323527 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 00:38:00.334356 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 00:38:00.344459 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 00:38:00.344526 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 00:38:00.354787 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 00:38:00.364585 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 00:38:00.364650 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 00:38:00.373910 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 00:38:00.387470 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 00:38:00.387536 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 00:38:00.397119 1249620 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1217 00:38:00.514797 1249620 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1217 00:38:00.515278 1249620 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1217 00:38:00.598806 1249620 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 00:42:04.803749 1249620 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1217 00:42:04.803773 1249620 kubeadm.go:319]
I1217 00:42:04.803890 1249620 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1217 00:42:04.809840 1249620 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1217 00:42:04.809899 1249620 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 00:42:04.809998 1249620 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1217 00:42:04.810058 1249620 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1217 00:42:04.810098 1249620 kubeadm.go:319] OS: Linux
I1217 00:42:04.810145 1249620 kubeadm.go:319] CGROUPS_CPU: enabled
I1217 00:42:04.810197 1249620 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1217 00:42:04.810250 1249620 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1217 00:42:04.810307 1249620 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1217 00:42:04.810358 1249620 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1217 00:42:04.810412 1249620 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1217 00:42:04.810468 1249620 kubeadm.go:319] CGROUPS_PIDS: enabled
I1217 00:42:04.810540 1249620 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1217 00:42:04.810604 1249620 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1217 00:42:04.810676 1249620 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 00:42:04.810773 1249620 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 00:42:04.810862 1249620 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 00:42:04.810922 1249620 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 00:42:04.812829 1249620 out.go:252] - Generating certificates and keys ...
I1217 00:42:04.812911 1249620 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 00:42:04.812978 1249620 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 00:42:04.813044 1249620 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1217 00:42:04.813099 1249620 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1217 00:42:04.813158 1249620 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1217 00:42:04.813206 1249620 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1217 00:42:04.813258 1249620 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1217 00:42:04.813377 1249620 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1217 00:42:04.813428 1249620 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1217 00:42:04.813545 1249620 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1217 00:42:04.813609 1249620 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1217 00:42:04.813694 1249620 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1217 00:42:04.813737 1249620 kubeadm.go:319] [certs] Generating "sa" key and public key
I1217 00:42:04.813791 1249620 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 00:42:04.813841 1249620 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 00:42:04.813896 1249620 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 00:42:04.813950 1249620 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 00:42:04.814011 1249620 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 00:42:04.814064 1249620 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 00:42:04.814143 1249620 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 00:42:04.814207 1249620 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 00:42:04.819008 1249620 out.go:252] - Booting up control plane ...
I1217 00:42:04.819129 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 00:42:04.819221 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 00:42:04.819294 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 00:42:04.819396 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 00:42:04.819491 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 00:42:04.819595 1249620 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 00:42:04.819684 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 00:42:04.819723 1249620 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 00:42:04.819857 1249620 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 00:42:04.819983 1249620 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 00:42:04.820051 1249620 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000056154s
I1217 00:42:04.820054 1249620 kubeadm.go:319]
I1217 00:42:04.820109 1249620 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1217 00:42:04.820140 1249620 kubeadm.go:319] - The kubelet is not running
I1217 00:42:04.820258 1249620 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1217 00:42:04.820265 1249620 kubeadm.go:319]
I1217 00:42:04.820368 1249620 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1217 00:42:04.820400 1249620 kubeadm.go:319] - 'systemctl status kubelet'
I1217 00:42:04.820442 1249620 kubeadm.go:319] - 'journalctl -xeu kubelet'
W1217 00:42:04.820572 1249620 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-608344 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000056154s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
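The SystemVerification warning above names the remediation for cgroup v1 hosts: set the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of such an override, run inside the node; the target path /var/lib/kubelet/config.yaml is the file the log shows kubeadm writing, and appending assumes the key is not already present:

# Opt the kubelet back into cgroup v1, per the WARNING text above
# (failCgroupV1 is the v1beta1 KubeletConfiguration field name).
sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
failCgroupV1: false
EOF
# Restart so the configuration is re-validated on startup.
sudo systemctl restart kubelet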
I1217 00:42:04.820670 1249620 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1217 00:42:04.820814 1249620 kubeadm.go:319]
I1217 00:42:05.230104 1249620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 00:42:05.243518 1249620 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1217 00:42:05.243570 1249620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 00:42:05.251369 1249620 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 00:42:05.251377 1249620 kubeadm.go:158] found existing configuration files:
I1217 00:42:05.251426 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 00:42:05.259153 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 00:42:05.259207 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 00:42:05.266346 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 00:42:05.273811 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 00:42:05.273869 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 00:42:05.281258 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 00:42:05.288736 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 00:42:05.288792 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 00:42:05.296179 1249620 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 00:42:05.303544 1249620 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 00:42:05.303599 1249620 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
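The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected API endpoint. A bash sketch of the same check, using the endpoint from this run:

endpoint="https://control-plane.minikube.internal:8441"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # grep exits non-zero on a missing file or no match (as in the log),
  # and the file is then removed before the next kubeadm init attempt.
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done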
I1217 00:42:05.310896 1249620 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1217 00:42:05.347419 1249620 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1217 00:42:05.347466 1249620 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 00:42:05.416369 1249620 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1217 00:42:05.416458 1249620 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1217 00:42:05.416492 1249620 kubeadm.go:319] OS: Linux
I1217 00:42:05.416536 1249620 kubeadm.go:319] CGROUPS_CPU: enabled
I1217 00:42:05.416582 1249620 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1217 00:42:05.416628 1249620 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1217 00:42:05.416675 1249620 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1217 00:42:05.416721 1249620 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1217 00:42:05.416768 1249620 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1217 00:42:05.416812 1249620 kubeadm.go:319] CGROUPS_PIDS: enabled
I1217 00:42:05.416858 1249620 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1217 00:42:05.416907 1249620 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1217 00:42:05.486343 1249620 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 00:42:05.486439 1249620 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 00:42:05.486528 1249620 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 00:42:05.492558 1249620 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 00:42:05.496200 1249620 out.go:252] - Generating certificates and keys ...
I1217 00:42:05.496287 1249620 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 00:42:05.496355 1249620 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 00:42:05.496453 1249620 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1217 00:42:05.496518 1249620 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1217 00:42:05.496591 1249620 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1217 00:42:05.496648 1249620 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1217 00:42:05.496715 1249620 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1217 00:42:05.496780 1249620 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1217 00:42:05.496902 1249620 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1217 00:42:05.496980 1249620 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1217 00:42:05.497270 1249620 kubeadm.go:319] [certs] Using the existing "sa" key
I1217 00:42:05.497325 1249620 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 00:42:05.785525 1249620 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 00:42:06.683665 1249620 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 00:42:07.376722 1249620 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 00:42:07.546496 1249620 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 00:42:07.778465 1249620 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 00:42:07.779108 1249620 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 00:42:07.781678 1249620 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 00:42:07.784866 1249620 out.go:252] - Booting up control plane ...
I1217 00:42:07.784967 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 00:42:07.785044 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 00:42:07.785114 1249620 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 00:42:07.805276 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 00:42:07.805373 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 00:42:07.816752 1249620 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 00:42:07.817521 1249620 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 00:42:07.817852 1249620 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 00:42:07.970505 1249620 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 00:42:07.970612 1249620 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 00:46:07.971562 1249620 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001108504s
I1217 00:46:07.971586 1249620 kubeadm.go:319]
I1217 00:46:07.971690 1249620 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1217 00:46:07.971746 1249620 kubeadm.go:319] - The kubelet is not running
I1217 00:46:07.972080 1249620 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1217 00:46:07.972086 1249620 kubeadm.go:319]
I1217 00:46:07.972275 1249620 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1217 00:46:07.972565 1249620 kubeadm.go:319] - 'systemctl status kubelet'
I1217 00:46:07.972632 1249620 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1217 00:46:07.972636 1249620 kubeadm.go:319]
I1217 00:46:07.977426 1249620 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1217 00:46:07.977935 1249620 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1217 00:46:07.978057 1249620 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 00:46:07.978313 1249620 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1217 00:46:07.978322 1249620 kubeadm.go:319]
I1217 00:46:07.978405 1249620 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1217 00:46:07.978459 1249620 kubeadm.go:403] duration metric: took 8m7.909874388s to StartCluster
I1217 00:46:07.978499 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1217 00:46:07.978557 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1217 00:46:08.008871 1249620 cri.go:89] found id: ""
I1217 00:46:08.008898 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.008911 1249620 logs.go:284] No container was found matching "kube-apiserver"
I1217 00:46:08.008917 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1217 00:46:08.008998 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1217 00:46:08.034772 1249620 cri.go:89] found id: ""
I1217 00:46:08.034786 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.034793 1249620 logs.go:284] No container was found matching "etcd"
I1217 00:46:08.034801 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1217 00:46:08.034868 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1217 00:46:08.061353 1249620 cri.go:89] found id: ""
I1217 00:46:08.061367 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.061374 1249620 logs.go:284] No container was found matching "coredns"
I1217 00:46:08.061379 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1217 00:46:08.061440 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1217 00:46:08.087277 1249620 cri.go:89] found id: ""
I1217 00:46:08.087291 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.087299 1249620 logs.go:284] No container was found matching "kube-scheduler"
I1217 00:46:08.087304 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1217 00:46:08.087364 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1217 00:46:08.116240 1249620 cri.go:89] found id: ""
I1217 00:46:08.116254 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.116262 1249620 logs.go:284] No container was found matching "kube-proxy"
I1217 00:46:08.116267 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1217 00:46:08.116324 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1217 00:46:08.148758 1249620 cri.go:89] found id: ""
I1217 00:46:08.148772 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.148779 1249620 logs.go:284] No container was found matching "kube-controller-manager"
I1217 00:46:08.148785 1249620 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1217 00:46:08.148846 1249620 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1217 00:46:08.176772 1249620 cri.go:89] found id: ""
I1217 00:46:08.176785 1249620 logs.go:282] 0 containers: []
W1217 00:46:08.176792 1249620 logs.go:284] No container was found matching "kindnet"
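Each of the lookups above is a crictl query scoped to one component name, and the empty results are what produce the "0 containers" lines. The equivalent manual query, exactly as run in the log:

# List container IDs in any state for a given component; empty output
# here means the static pod never started.
sudo crictl ps -a --quiet --name=kube-apiserver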
I1217 00:46:08.176800 1249620 logs.go:123] Gathering logs for container status ...
I1217 00:46:08.176811 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1217 00:46:08.203752 1249620 logs.go:123] Gathering logs for kubelet ...
I1217 00:46:08.203768 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1217 00:46:08.259944 1249620 logs.go:123] Gathering logs for dmesg ...
I1217 00:46:08.259962 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1217 00:46:08.274608 1249620 logs.go:123] Gathering logs for describe nodes ...
I1217 00:46:08.274624 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1217 00:46:08.345527 1249620 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1217 00:46:08.332278 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.332809 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.334525 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.339713 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.341262 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1217 00:46:08.332278 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.332809 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.334525 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.339713 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:08.341262 4807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1217 00:46:08.345538 1249620 logs.go:123] Gathering logs for containerd ...
I1217 00:46:08.345549 1249620 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
W1217 00:46:08.383462 1249620 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001108504s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 00:46:08.383508 1249620 out.go:285] *
W1217 00:46:08.383620 1249620 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001108504s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 00:46:08.383693 1249620 out.go:285] *
W1217 00:46:08.385855 1249620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1217 00:46:08.391282 1249620 out.go:203]
W1217 00:46:08.394806 1249620 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001108504s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 00:46:08.394843 1249620 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1217 00:46:08.394864 1249620 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1217 00:46:08.398711 1249620 out.go:203]
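The suggestion printed above can be applied directly on a retry. A minimal sketch, reusing the binary, profile, and flags from this run; note that the kubelet journal below points at cgroup v1 validation as the actual blocker, so this override alone may not be sufficient:

out/minikube-linux-arm64 start -p functional-608344 --driver=docker \
  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd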
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522436142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522515290Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522617913Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522688379Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522755071Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522862092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522927668Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.522998463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.523074936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.523156602Z" level=info msg="Connect containerd service"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.523516648Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.524216882Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.538375053Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.538622506Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.538550875Z" level=info msg="Start subscribing containerd event"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.544796580Z" level=info msg="Start recovering state"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581522148Z" level=info msg="Start event monitor"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581757302Z" level=info msg="Start cni network conf syncer for default"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581827727Z" level=info msg="Start streaming server"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581904036Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.581963195Z" level=info msg="runtime interface starting up..."
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.582025695Z" level=info msg="starting plugins..."
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.582100231Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 17 00:37:58 functional-608344 systemd[1]: Started containerd.service - containerd container runtime.
Dec 17 00:37:58 functional-608344 containerd[764]: time="2025-12-17T00:37:58.584443956Z" level=info msg="containerd successfully booted in 0.088658s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1217 00:46:09.349341 4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:09.349961 4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:09.351547 4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:09.351955 4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 00:46:09.353476 4913 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec17 00:26] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
00:46:09 up 6:28, 0 user, load average: 0.41, 0.60, 1.26
Linux functional-608344 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 17 00:46:05 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 00:46:06 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 17 00:46:06 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:06 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:06 functional-608344 kubelet[4715]: E1217 00:46:06.669538 4715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 00:46:06 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 00:46:06 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 00:46:07 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 17 00:46:07 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:07 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:07 functional-608344 kubelet[4721]: E1217 00:46:07.412323 4721 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 00:46:07 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 00:46:07 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 17 00:46:08 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:08 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:08 functional-608344 kubelet[4768]: E1217 00:46:08.187734 4768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 17 00:46:08 functional-608344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:08 functional-608344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 00:46:08 functional-608344 kubelet[4829]: E1217 00:46:08.947752 4829 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 00:46:08 functional-608344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
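The kubelet journal above shows the root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless explicitly permitted. Which hierarchy the host uses can be read from the filesystem type of /sys/fs/cgroup; a one-line check:

# "cgroup2fs" indicates cgroup v2; "tmpfs" indicates the legacy v1 hierarchy.
stat -fc %T /sys/fs/cgroup/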
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-608344 -n functional-608344: exit status 6 (314.175346ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1217 00:46:09.779258 1255328 status.go:458] kubeconfig endpoint: get endpoint: "functional-608344" does not appear in /home/jenkins/minikube-integration/22168-1208015/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-608344" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.76s)