=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-682596 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1217 20:17:32.880389 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/addons-060437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:19:49.015139 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/addons-060437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:20:16.729066 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/addons-060437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.508393 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.514804 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.526236 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.547710 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.589192 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.670741 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:28.832348 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:29.154230 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:29.796354 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:31.077916 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:33.639408 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:38.760771 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:21:49.002062 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:22:09.484323 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:22:50.445847 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:12.367431 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-032730/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:24:49.015195 369461 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/addons-060437/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
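The cert_rotation errors above come from client-go's tls-transport-cache, which is still watching client certificates for profiles that earlier steps in this run deleted (the Audit table below records a delete -p functional-032730). They are noise relative to the failure that follows. A minimal sketch to confirm the watched certs are really gone, assuming the MINIKUBE_HOME shown in the log:
    ls -l /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/*/client.crt  # expect: No such file or directory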
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-682596 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m20.8787724s)
-- stdout --
* [functional-682596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=21808
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21808-367595/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-367595/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-682596" primary control-plane node in "functional-682596" cluster
* Pulling base image v0.0.48-1765661130-22141 ...
* Found network options:
- HTTP_PROXY=localhost:34181
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:34181 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-682596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-682596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000237436s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000252088s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000252088s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
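Minikube's closing suggestion points at a kubelet cgroup-driver mismatch, and the SystemVerification warning above says that on kubelet v1.35 or newer, cgroup v1 hosts need the KubeletConfiguration field FailCgroupV1 set to false. A hedged retry built only from those hints (the cgroup-driver value is taken verbatim from the suggestion; whether it resolves this cgroup v1 kernel is unverified):
    # Retry the same start with the kubelet cgroup driver minikube suggests.
    out/minikube-linux-arm64 start -p functional-682596 --memory=4096 \
      --apiserver-port=8441 --wait=all --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd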
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-682596 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-682596
helpers_test.go:244: (dbg) docker inspect functional-682596:
-- stdout --
[
{
"Id": "efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77",
"Created": "2025-12-17T20:17:26.774929696Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 408854,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-17T20:17:26.844564666Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:2a6398fc76fc21dc0a77ac54600c2604c101bff52e66ecf65f88ec0f1a8cff2d",
"ResolvConfPath": "/var/lib/docker/containers/efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77/hostname",
"HostsPath": "/var/lib/docker/containers/efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77/hosts",
"LogPath": "/var/lib/docker/containers/efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77/efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77-json.log",
"Name": "/functional-682596",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-682596:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-682596",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "efc9468a7e551914f92ba48a75f43698d6d0bf3671e8866cdeeebc5a6393be77",
"LowerDir": "/var/lib/docker/overlay2/87cdd73f63f42def67677e3949be0b1e0c0455a2f4f85554084b51511cf6b268-init/diff:/var/lib/docker/overlay2/83c8e6311894730d80a5439b5d4991744e9cfa6d0015df9caca346d57baf92e8/diff",
"MergedDir": "/var/lib/docker/overlay2/87cdd73f63f42def67677e3949be0b1e0c0455a2f4f85554084b51511cf6b268/merged",
"UpperDir": "/var/lib/docker/overlay2/87cdd73f63f42def67677e3949be0b1e0c0455a2f4f85554084b51511cf6b268/diff",
"WorkDir": "/var/lib/docker/overlay2/87cdd73f63f42def67677e3949be0b1e0c0455a2f4f85554084b51511cf6b268/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-682596",
"Source": "/var/lib/docker/volumes/functional-682596/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-682596",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-682596",
"name.minikube.sigs.k8s.io": "functional-682596",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "8e0f8d4915f888f90df7adb000bd0e749885d304e33053e85751193487b627b9",
"SandboxKey": "/var/run/docker/netns/8e0f8d4915f8",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33163"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33164"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33167"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33165"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33166"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-682596": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "de:95:c1:d9:d4:32",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "9e66e4dbc8284f728f81715f37c51d8272e96fcac9fb378874c982b3077b6cc2",
"EndpointID": "0db3c56cfb2be75c981ed53adcc07de7cd33db60d51c01b0e875c8d41cf02897",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-682596",
"efc9468a7e55"
]
}
}
}
}
]
-- /stdout --
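The inspect output shows the node container itself is healthy: State.Status is running, the static IP 192.168.49.2 matches the NO_PROXY warning earlier, and apiserver port 8441/tcp is published on 127.0.0.1:33166, so the failure is inside the node rather than at the Docker layer. A sketch for pulling just those fields instead of the full dump, using the same --format mechanism this trace uses elsewhere:
    docker container inspect functional-682596 --format '{{.State.Status}}'
    docker port functional-682596 8441/tcp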
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-682596 -n functional-682596
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-682596 -n functional-682596: exit status 6 (308.016387ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1217 20:25:42.878543 414001 status.go:458] kubeconfig endpoint: get endpoint: "functional-682596" does not appear in /home/jenkins/minikube-integration/21808-367595/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
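The stale-context warning and the missing kubeconfig endpoint are consistent with the failed start: the functional-682596 entry was never written to the kubeconfig. The remedy the status output itself names, as a sketch (with no cluster endpoint recorded it may not help until a start actually succeeds):
    out/minikube-linux-arm64 -p functional-682596 update-context
    out/minikube-linux-arm64 status -p functional-682596 -n functional-682596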
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-682596 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-032730 image load --daemon kicbase/echo-server:functional-032730 --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ ssh │ functional-032730 ssh sudo cat /usr/share/ca-certificates/3694612.pem │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ ssh │ functional-032730 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ ssh │ functional-032730 ssh sudo cat /etc/test/nested/copy/369461/hosts │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image load --daemon kicbase/echo-server:functional-032730 --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image save kicbase/echo-server:functional-032730 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ update-context │ functional-032730 update-context --alsologtostderr -v=2 │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ update-context │ functional-032730 update-context --alsologtostderr -v=2 │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image rm kicbase/echo-server:functional-032730 --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ update-context │ functional-032730 update-context --alsologtostderr -v=2 │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image save --daemon kicbase/echo-server:functional-032730 --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls --format yaml --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls --format short --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls --format json --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls --format table --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ ssh │ functional-032730 ssh pgrep buildkitd │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ │
│ image │ functional-032730 image build -t localhost/my-image:functional-032730 testdata/build --alsologtostderr │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ image │ functional-032730 image ls │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ delete │ -p functional-032730 │ functional-032730 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ 17 Dec 25 20:17 UTC │
│ start │ -p functional-682596 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-682596 │ jenkins │ v1.37.0 │ 17 Dec 25 20:17 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 20:17:21
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 20:17:21.726426 408464 out.go:360] Setting OutFile to fd 1 ...
I1217 20:17:21.726544 408464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:17:21.726548 408464 out.go:374] Setting ErrFile to fd 2...
I1217 20:17:21.726552 408464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 20:17:21.726806 408464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-367595/.minikube/bin
I1217 20:17:21.727217 408464 out.go:368] Setting JSON to false
I1217 20:17:21.728018 408464 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10787,"bootTime":1765991855,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1217 20:17:21.728075 408464 start.go:143] virtualization:
I1217 20:17:21.732855 408464 out.go:179] * [functional-682596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1217 20:17:21.736651 408464 out.go:179] - MINIKUBE_LOCATION=21808
I1217 20:17:21.736757 408464 notify.go:221] Checking for updates...
I1217 20:17:21.744130 408464 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 20:17:21.747648 408464 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21808-367595/kubeconfig
I1217 20:17:21.751390 408464 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-367595/.minikube
I1217 20:17:21.754752 408464 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1217 20:17:21.757946 408464 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 20:17:21.761443 408464 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 20:17:21.789893 408464 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1217 20:17:21.790020 408464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 20:17:21.849589 408464 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 20:17:21.840363338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 20:17:21.849692 408464 docker.go:319] overlay module found
I1217 20:17:21.853030 408464 out.go:179] * Using the docker driver based on user configuration
I1217 20:17:21.856115 408464 start.go:309] selected driver: docker
I1217 20:17:21.856125 408464 start.go:927] validating driver "docker" against <nil>
I1217 20:17:21.856137 408464 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 20:17:21.856902 408464 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1217 20:17:21.910849 408464 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-17 20:17:21.90199796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1217 20:17:21.911005 408464 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1217 20:17:21.911226 408464 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 20:17:21.914292 408464 out.go:179] * Using Docker driver with root privileges
I1217 20:17:21.917410 408464 cni.go:84] Creating CNI manager for ""
I1217 20:17:21.917472 408464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1217 20:17:21.917479 408464 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1217 20:17:21.917557 408464 start.go:353] cluster config:
{Name:functional-682596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-682596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 20:17:21.920752 408464 out.go:179] * Starting "functional-682596" primary control-plane node in "functional-682596" cluster
I1217 20:17:21.923735 408464 cache.go:134] Beginning downloading kic base image for docker with containerd
I1217 20:17:21.926625 408464 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
I1217 20:17:21.929425 408464 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 20:17:21.929482 408464 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-367595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
I1217 20:17:21.929489 408464 cache.go:65] Caching tarball of preloaded images
I1217 20:17:21.929501 408464 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
I1217 20:17:21.929577 408464 preload.go:238] Found /home/jenkins/minikube-integration/21808-367595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1217 20:17:21.929593 408464 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
I1217 20:17:21.929917 408464 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/config.json ...
I1217 20:17:21.929939 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/config.json: {Name:mk7e667c03cec200e74dbcb9647a4a92f028de4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:21.948528 408464 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
I1217 20:17:21.948538 408464 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
I1217 20:17:21.948556 408464 cache.go:243] Successfully downloaded all kic artifacts
I1217 20:17:21.948585 408464 start.go:360] acquireMachinesLock for functional-682596: {Name:mk49b95a4c72eb2d15a1ae0f35918a9843d0b3df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 20:17:21.948695 408464 start.go:364] duration metric: took 95.451µs to acquireMachinesLock for "functional-682596"
I1217 20:17:21.948720 408464 start.go:93] Provisioning new machine with config: &{Name:functional-682596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-682596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1217 20:17:21.948784 408464 start.go:125] createHost starting for "" (driver="docker")
I1217 20:17:21.952131 408464 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1217 20:17:21.952423 408464 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34181 to docker env.
I1217 20:17:21.952448 408464 start.go:159] libmachine.API.Create for "functional-682596" (driver="docker")
I1217 20:17:21.952469 408464 client.go:173] LocalClient.Create starting
I1217 20:17:21.952533 408464 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem
I1217 20:17:21.952563 408464 main.go:143] libmachine: Decoding PEM data...
I1217 20:17:21.952577 408464 main.go:143] libmachine: Parsing certificate...
I1217 20:17:21.952631 408464 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-367595/.minikube/certs/cert.pem
I1217 20:17:21.952646 408464 main.go:143] libmachine: Decoding PEM data...
I1217 20:17:21.952659 408464 main.go:143] libmachine: Parsing certificate...
I1217 20:17:21.953002 408464 cli_runner.go:164] Run: docker network inspect functional-682596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 20:17:21.968635 408464 cli_runner.go:211] docker network inspect functional-682596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 20:17:21.968715 408464 network_create.go:284] running [docker network inspect functional-682596] to gather additional debugging logs...
I1217 20:17:21.968730 408464 cli_runner.go:164] Run: docker network inspect functional-682596
W1217 20:17:21.985090 408464 cli_runner.go:211] docker network inspect functional-682596 returned with exit code 1
I1217 20:17:21.985110 408464 network_create.go:287] error running [docker network inspect functional-682596]: docker network inspect functional-682596: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-682596 not found
I1217 20:17:21.985123 408464 network_create.go:289] output of [docker network inspect functional-682596]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-682596 not found
** /stderr **
I1217 20:17:21.985223 408464 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 20:17:22.001590 408464 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400194c640}
I1217 20:17:22.001631 408464 network_create.go:124] attempt to create docker network functional-682596 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1217 20:17:22.001700 408464 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-682596 functional-682596
I1217 20:17:22.059151 408464 network_create.go:108] docker network functional-682596 192.168.49.0/24 created
I1217 20:17:22.059174 408464 kic.go:121] calculated static IP "192.168.49.2" for the "functional-682596" container
I1217 20:17:22.059249 408464 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1217 20:17:22.076475 408464 cli_runner.go:164] Run: docker volume create functional-682596 --label name.minikube.sigs.k8s.io=functional-682596 --label created_by.minikube.sigs.k8s.io=true
I1217 20:17:22.096119 408464 oci.go:103] Successfully created a docker volume functional-682596
I1217 20:17:22.096207 408464 cli_runner.go:164] Run: docker run --rm --name functional-682596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-682596 --entrypoint /usr/bin/test -v functional-682596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
I1217 20:17:22.638069 408464 oci.go:107] Successfully prepared a docker volume functional-682596
I1217 20:17:22.638130 408464 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 20:17:22.638148 408464 kic.go:194] Starting extracting preloaded images to volume ...
I1217 20:17:22.638211 408464 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-367595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-682596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
I1217 20:17:26.699436 408464 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-367595/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-682596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (4.061190772s)
I1217 20:17:26.699458 408464 kic.go:203] duration metric: took 4.061316877s to extract preloaded images to volume ...
W1217 20:17:26.699606 408464 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1217 20:17:26.699743 408464 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1217 20:17:26.760160 408464 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-682596 --name functional-682596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-682596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-682596 --network functional-682596 --ip 192.168.49.2 --volume functional-682596:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
I1217 20:17:27.084701 408464 cli_runner.go:164] Run: docker container inspect functional-682596 --format={{.State.Running}}
I1217 20:17:27.110049 408464 cli_runner.go:164] Run: docker container inspect functional-682596 --format={{.State.Status}}
I1217 20:17:27.134409 408464 cli_runner.go:164] Run: docker exec functional-682596 stat /var/lib/dpkg/alternatives/iptables
I1217 20:17:27.181682 408464 oci.go:144] the created container "functional-682596" has a running status.
I1217 20:17:27.181701 408464 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-367595/.minikube/machines/functional-682596/id_rsa...
I1217 20:17:27.464023 408464 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-367595/.minikube/machines/functional-682596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1217 20:17:27.491072 408464 cli_runner.go:164] Run: docker container inspect functional-682596 --format={{.State.Status}}
I1217 20:17:27.515341 408464 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1217 20:17:27.515353 408464 kic_runner.go:114] Args: [docker exec --privileged functional-682596 chown docker:docker /home/docker/.ssh/authorized_keys]
I1217 20:17:27.581169 408464 cli_runner.go:164] Run: docker container inspect functional-682596 --format={{.State.Status}}
I1217 20:17:27.607895 408464 machine.go:94] provisionDockerMachine start ...
I1217 20:17:27.607991 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:27.628691 408464 main.go:143] libmachine: Using SSH client type: native
I1217 20:17:27.629019 408464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33163 <nil> <nil>}
I1217 20:17:27.629025 408464 main.go:143] libmachine: About to run SSH command:
hostname
I1217 20:17:27.633580 408464 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1217 20:17:30.763773 408464 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-682596
I1217 20:17:30.763787 408464 ubuntu.go:182] provisioning hostname "functional-682596"
I1217 20:17:30.763850 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:30.781985 408464 main.go:143] libmachine: Using SSH client type: native
I1217 20:17:30.782415 408464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33163 <nil> <nil>}
I1217 20:17:30.782428 408464 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-682596 && echo "functional-682596" | sudo tee /etc/hostname
I1217 20:17:30.925683 408464 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-682596
I1217 20:17:30.925754 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:30.943805 408464 main.go:143] libmachine: Using SSH client type: native
I1217 20:17:30.944107 408464 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33163 <nil> <nil>}
I1217 20:17:30.944123 408464 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-682596' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-682596/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-682596' | sudo tee -a /etc/hosts;
  fi
fi
I1217 20:17:31.077007 408464 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 20:17:31.077026 408464 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-367595/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-367595/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-367595/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-367595/.minikube}
I1217 20:17:31.077044 408464 ubuntu.go:190] setting up certificates
I1217 20:17:31.077052 408464 provision.go:84] configureAuth start
I1217 20:17:31.077131 408464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-682596
I1217 20:17:31.096361 408464 provision.go:143] copyHostCerts
I1217 20:17:31.096424 408464 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-367595/.minikube/ca.pem, removing ...
I1217 20:17:31.096432 408464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-367595/.minikube/ca.pem
I1217 20:17:31.096514 408464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-367595/.minikube/ca.pem (1082 bytes)
I1217 20:17:31.096614 408464 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-367595/.minikube/cert.pem, removing ...
I1217 20:17:31.096618 408464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-367595/.minikube/cert.pem
I1217 20:17:31.096645 408464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-367595/.minikube/cert.pem (1123 bytes)
I1217 20:17:31.096705 408464 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-367595/.minikube/key.pem, removing ...
I1217 20:17:31.096709 408464 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-367595/.minikube/key.pem
I1217 20:17:31.096738 408464 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-367595/.minikube/key.pem (1679 bytes)
I1217 20:17:31.096808 408464 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-367595/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca-key.pem org=jenkins.functional-682596 san=[127.0.0.1 192.168.49.2 functional-682596 localhost minikube]
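For reference, a roughly equivalent manual issuance of that server certificate with openssl, a sketch only: minikube generates these in-process in Go, and the file names below assume the certs/ paths listed above plus a bash shell for the process substitution. The SANs match the san=[...] list in the log line above.

    # generate the machine key, issue a CSR, then sign it with the minikube CA, adding the SANs
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.functional-682596" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-682596,DNS:localhost,DNS:minikube') \
      -out server.pem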
I1217 20:17:31.215483 408464 provision.go:177] copyRemoteCerts
I1217 20:17:31.215557 408464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 20:17:31.215596 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:31.232538 408464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/machines/functional-682596/id_rsa Username:docker}
I1217 20:17:31.328090 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1217 20:17:31.345537 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1217 20:17:31.362711 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1217 20:17:31.379536 408464 provision.go:87] duration metric: took 302.460445ms to configureAuth
I1217 20:17:31.379554 408464 ubuntu.go:206] setting minikube options for container-runtime
I1217 20:17:31.379749 408464 config.go:182] Loaded profile config "functional-682596": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1217 20:17:31.379755 408464 machine.go:97] duration metric: took 3.771849448s to provisionDockerMachine
I1217 20:17:31.379760 408464 client.go:176] duration metric: took 9.427287397s to LocalClient.Create
I1217 20:17:31.379784 408464 start.go:167] duration metric: took 9.427335906s to libmachine.API.Create "functional-682596"
I1217 20:17:31.379790 408464 start.go:293] postStartSetup for "functional-682596" (driver="docker")
I1217 20:17:31.379800 408464 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 20:17:31.379847 408464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 20:17:31.379881 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:31.396723 408464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/machines/functional-682596/id_rsa Username:docker}
I1217 20:17:31.496364 408464 ssh_runner.go:195] Run: cat /etc/os-release
I1217 20:17:31.499676 408464 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1217 20:17:31.499694 408464 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1217 20:17:31.499704 408464 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-367595/.minikube/addons for local assets ...
I1217 20:17:31.499761 408464 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-367595/.minikube/files for local assets ...
I1217 20:17:31.499846 408464 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/ssl/certs/3694612.pem -> 3694612.pem in /etc/ssl/certs
I1217 20:17:31.499926 408464 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/test/nested/copy/369461/hosts -> hosts in /etc/test/nested/copy/369461
I1217 20:17:31.499972 408464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/369461
I1217 20:17:31.507593 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/ssl/certs/3694612.pem --> /etc/ssl/certs/3694612.pem (1708 bytes)
I1217 20:17:31.525966 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/test/nested/copy/369461/hosts --> /etc/test/nested/copy/369461/hosts (40 bytes)
I1217 20:17:31.545730 408464 start.go:296] duration metric: took 165.925388ms for postStartSetup
I1217 20:17:31.546127 408464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-682596
I1217 20:17:31.567546 408464 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/config.json ...
I1217 20:17:31.567837 408464 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1217 20:17:31.567879 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:31.584998 408464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/machines/functional-682596/id_rsa Username:docker}
I1217 20:17:31.677573 408464 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1217 20:17:31.682686 408464 start.go:128] duration metric: took 9.733887778s to createHost
I1217 20:17:31.682701 408464 start.go:83] releasing machines lock for "functional-682596", held for 9.733999296s
I1217 20:17:31.682772 408464 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-682596
I1217 20:17:31.704525 408464 out.go:179] * Found network options:
I1217 20:17:31.707558 408464 out.go:179] - HTTP_PROXY=localhost:34181
W1217 20:17:31.710383 408464 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1217 20:17:31.713311 408464 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1217 20:17:31.716190 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/369461.pem (1338 bytes)
W1217 20:17:31.716239 408464 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-367595/.minikube/certs/369461_empty.pem, impossibly tiny 0 bytes
I1217 20:17:31.716314 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca-key.pem (1675 bytes)
I1217 20:17:31.716354 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem (1082 bytes)
I1217 20:17:31.716380 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/cert.pem (1123 bytes)
I1217 20:17:31.716407 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/key.pem (1679 bytes)
I1217 20:17:31.716451 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/ssl/certs/3694612.pem (1708 bytes)
I1217 20:17:31.716520 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/ssl/certs/3694612.pem --> /usr/share/ca-certificates/3694612.pem (1708 bytes)
I1217 20:17:31.716579 408464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-682596
I1217 20:17:31.733697 408464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-367595/.minikube/machines/functional-682596/id_rsa Username:docker}
I1217 20:17:31.842902 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 20:17:31.861156 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/certs/369461.pem --> /usr/share/ca-certificates/369461.pem (1338 bytes)
I1217 20:17:31.881574 408464 ssh_runner.go:195] Run: openssl version
I1217 20:17:31.888051 408464 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/369461.pem
I1217 20:17:31.896077 408464 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/369461.pem /etc/ssl/certs/369461.pem
I1217 20:17:31.903494 408464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/369461.pem
I1217 20:17:31.907204 408464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:17 /usr/share/ca-certificates/369461.pem
I1217 20:17:31.907271 408464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/369461.pem
I1217 20:17:31.948081 408464 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1217 20:17:31.955692 408464 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/369461.pem /etc/ssl/certs/51391683.0
I1217 20:17:31.962750 408464 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3694612.pem
I1217 20:17:31.970050 408464 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3694612.pem /etc/ssl/certs/3694612.pem
I1217 20:17:31.977561 408464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3694612.pem
I1217 20:17:31.981522 408464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:17 /usr/share/ca-certificates/3694612.pem
I1217 20:17:31.981575 408464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3694612.pem
I1217 20:17:32.023052 408464 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1217 20:17:32.031209 408464 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3694612.pem /etc/ssl/certs/3ec20f2e.0
I1217 20:17:32.039068 408464 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:32.047122 408464 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 20:17:32.055001 408464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:32.059062 408464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:08 /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:32.059119 408464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:32.104846 408464 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 20:17:32.112502 408464 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1217 20:17:32.121041 408464 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
I1217 20:17:32.124629 408464 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
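The test/ln/ls/x509 rounds above follow OpenSSL's subject-hash convention: a CA is trusted once /etc/ssl/certs/<subject-hash>.0 points at its PEM, and the hash is exactly what `openssl x509 -hash -noout` prints (51391683 for 369461.pem above, 3ec20f2e for 3694612.pem, b5213941 for minikubeCA.pem). The same pattern done by hand for one cert:

    # compute the subject hash, then install the <hash>.0 symlink OpenSSL looks up at verify time
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/369461.pem)
    sudo ln -fs /usr/share/ca-certificates/369461.pem "/etc/ssl/certs/${HASH}.0"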
I1217 20:17:32.127988 408464 ssh_runner.go:195] Run: cat /version.json
I1217 20:17:32.128068 408464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 20:17:32.219904 408464 ssh_runner.go:195] Run: systemctl --version
I1217 20:17:32.226268 408464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 20:17:32.230643 408464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 20:17:32.230716 408464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 20:17:32.257253 408464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1217 20:17:32.257278 408464 start.go:496] detecting cgroup driver to use...
I1217 20:17:32.257309 408464 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1217 20:17:32.257369 408464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1217 20:17:32.272465 408464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1217 20:17:32.285266 408464 docker.go:218] disabling cri-docker service (if available) ...
I1217 20:17:32.285319 408464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1217 20:17:32.302464 408464 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1217 20:17:32.320864 408464 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1217 20:17:32.427231 408464 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1217 20:17:32.549086 408464 docker.go:234] disabling docker service ...
I1217 20:17:32.549147 408464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1217 20:17:32.570903 408464 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1217 20:17:32.584566 408464 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1217 20:17:32.695913 408464 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1217 20:17:32.811758 408464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 20:17:32.824697 408464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1217 20:17:32.838329 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1217 20:17:32.847167 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1217 20:17:32.856134 408464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1217 20:17:32.856193 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1217 20:17:32.864946 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 20:17:32.874040 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1217 20:17:32.883288 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 20:17:32.892334 408464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 20:17:32.900384 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1217 20:17:32.909100 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1217 20:17:32.917753 408464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
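Net effect of the sed batch above on /etc/containerd/config.toml: cgroupfs instead of the systemd cgroup driver, the pinned pause image, the runc v2 shim, the standard CNI conf dir, and unprivileged ports allowed. A quick way to confirm from inside the node; the expected values are inferred from the commands above, not read from the file:

    grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected, roughly:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true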
I1217 20:17:32.926759 408464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 20:17:32.934238 408464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 20:17:32.941415 408464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 20:17:33.051481 408464 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1217 20:17:33.185216 408464 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1217 20:17:33.185277 408464 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1217 20:17:33.190164 408464 start.go:564] Will wait 60s for crictl version
I1217 20:17:33.190229 408464 ssh_runner.go:195] Run: which crictl
I1217 20:17:33.193950 408464 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1217 20:17:33.217208 408464 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1217 20:17:33.217268 408464 ssh_runner.go:195] Run: containerd --version
I1217 20:17:33.239042 408464 ssh_runner.go:195] Run: containerd --version
I1217 20:17:33.262252 408464 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
I1217 20:17:33.265249 408464 cli_runner.go:164] Run: docker network inspect functional-682596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 20:17:33.284021 408464 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1217 20:17:33.287934 408464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 20:17:33.297762 408464 kubeadm.go:884] updating cluster {Name:functional-682596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-682596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 20:17:33.297893 408464 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1217 20:17:33.297965 408464 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 20:17:33.321874 408464 containerd.go:627] all images are preloaded for containerd runtime.
I1217 20:17:33.321886 408464 containerd.go:534] Images already preloaded, skipping extraction
I1217 20:17:33.321950 408464 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 20:17:33.348051 408464 containerd.go:627] all images are preloaded for containerd runtime.
I1217 20:17:33.348062 408464 cache_images.go:86] Images are preloaded, skipping loading
I1217 20:17:33.348068 408464 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
I1217 20:17:33.348178 408464 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-682596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-682596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 20:17:33.348244 408464 ssh_runner.go:195] Run: sudo crictl info
I1217 20:17:33.373604 408464 cni.go:84] Creating CNI manager for ""
I1217 20:17:33.373612 408464 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1217 20:17:33.373619 408464 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 20:17:33.373639 408464 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-682596 NodeName:functional-682596 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 20:17:33.373754 408464 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-682596"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-rc.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
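This rendered config is what gets copied to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init further below. If such a file needs vetting without touching a real node, kubeadm's dry-run mode will parse and exercise it without making persistent changes to the host (a sketch, run wherever the file is staged):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run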
I1217 20:17:33.373819 408464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
I1217 20:17:33.381695 408464 binaries.go:51] Found k8s binaries, skipping transfer
I1217 20:17:33.381758 408464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 20:17:33.389618 408464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
I1217 20:17:33.402477 408464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I1217 20:17:33.415211 408464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
I1217 20:17:33.428039 408464 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1217 20:17:33.432152 408464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1217 20:17:33.442688 408464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 20:17:33.551422 408464 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 20:17:33.569132 408464 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596 for IP: 192.168.49.2
I1217 20:17:33.569142 408464 certs.go:195] generating shared ca certs ...
I1217 20:17:33.569158 408464 certs.go:227] acquiring lock for ca certs: {Name:mk528c7ee25f2f3d78de33f266a77f908cb2a9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:33.569295 408464 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-367595/.minikube/ca.key
I1217 20:17:33.569343 408464 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-367595/.minikube/proxy-client-ca.key
I1217 20:17:33.569348 408464 certs.go:257] generating profile certs ...
I1217 20:17:33.569403 408464 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/client.key
I1217 20:17:33.569412 408464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/client.crt with IP's: []
I1217 20:17:34.102238 408464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/client.crt ...
I1217 20:17:34.102254 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/client.crt: {Name:mk10b4ae3de6bc0fd053aefbfebc11b5e94ecf32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:34.102460 408464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/client.key ...
I1217 20:17:34.102466 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/client.key: {Name:mkfbebb82964dd97030ecf6f640f403a9688684b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:34.102554 408464 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.key.0c30bf8d
I1217 20:17:34.102567 408464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.crt.0c30bf8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1217 20:17:34.514245 408464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.crt.0c30bf8d ...
I1217 20:17:34.514261 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.crt.0c30bf8d: {Name:mk8c85d7b466da52cc57fbefe18f1b22b0f2142b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:34.514459 408464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.key.0c30bf8d ...
I1217 20:17:34.514468 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.key.0c30bf8d: {Name:mk3a794df56061feee53f25aee3143027fa0e637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:34.514549 408464 certs.go:382] copying /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.crt.0c30bf8d -> /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.crt
I1217 20:17:34.514622 408464 certs.go:386] copying /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.key.0c30bf8d -> /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.key
I1217 20:17:34.514673 408464 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.key
I1217 20:17:34.514686 408464 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.crt with IP's: []
I1217 20:17:34.732693 408464 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.crt ...
I1217 20:17:34.732708 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.crt: {Name:mk4c2ecdd670098ee36bc4877b5429132f71772f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:34.732897 408464 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.key ...
I1217 20:17:34.732905 408464 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.key: {Name:mkcaa0052c55452343df34bcc47abf04c9c129bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 20:17:34.733090 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/369461.pem (1338 bytes)
W1217 20:17:34.733167 408464 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-367595/.minikube/certs/369461_empty.pem, impossibly tiny 0 bytes
I1217 20:17:34.733174 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca-key.pem (1675 bytes)
I1217 20:17:34.733201 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/ca.pem (1082 bytes)
I1217 20:17:34.733224 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/cert.pem (1123 bytes)
I1217 20:17:34.733248 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/certs/key.pem (1679 bytes)
I1217 20:17:34.733293 408464 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/ssl/certs/3694612.pem (1708 bytes)
I1217 20:17:34.733899 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 20:17:34.752811 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1217 20:17:34.770536 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 20:17:34.788392 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1217 20:17:34.806750 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1217 20:17:34.824539 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1217 20:17:34.842740 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 20:17:34.860711 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/profiles/functional-682596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1217 20:17:34.878384 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/files/etc/ssl/certs/3694612.pem --> /usr/share/ca-certificates/3694612.pem (1708 bytes)
I1217 20:17:34.895186 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 20:17:34.912484 408464 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-367595/.minikube/certs/369461.pem --> /usr/share/ca-certificates/369461.pem (1338 bytes)
I1217 20:17:34.930719 408464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 20:17:34.943909 408464 ssh_runner.go:195] Run: openssl version
I1217 20:17:34.950422 408464 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3694612.pem
I1217 20:17:34.957881 408464 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3694612.pem /etc/ssl/certs/3694612.pem
I1217 20:17:34.965315 408464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3694612.pem
I1217 20:17:34.969148 408464 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 20:17 /usr/share/ca-certificates/3694612.pem
I1217 20:17:34.969206 408464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3694612.pem
I1217 20:17:35.011866 408464 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1217 20:17:35.020380 408464 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:35.028412 408464 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 20:17:35.036483 408464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:35.040994 408464 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 20:08 /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:35.041050 408464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 20:17:35.082880 408464 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 20:17:35.090758 408464 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/369461.pem
I1217 20:17:35.098511 408464 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/369461.pem /etc/ssl/certs/369461.pem
I1217 20:17:35.106244 408464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/369461.pem
I1217 20:17:35.110074 408464 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 20:17 /usr/share/ca-certificates/369461.pem
I1217 20:17:35.110139 408464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/369461.pem
I1217 20:17:35.151546 408464 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1217 20:17:35.159189 408464 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 20:17:35.162712 408464 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1217 20:17:35.162768 408464 kubeadm.go:401] StartCluster: {Name:functional-682596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-682596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 20:17:35.162847 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1217 20:17:35.162908 408464 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1217 20:17:35.194033 408464 cri.go:89] found id: ""
I1217 20:17:35.194106 408464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 20:17:35.201873 408464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 20:17:35.209373 408464 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1217 20:17:35.209426 408464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 20:17:35.217028 408464 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 20:17:35.217037 408464 kubeadm.go:158] found existing configuration files:
I1217 20:17:35.217093 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 20:17:35.224840 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 20:17:35.224906 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 20:17:35.233070 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 20:17:35.241440 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 20:17:35.241495 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 20:17:35.248882 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 20:17:35.257370 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 20:17:35.257477 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 20:17:35.265788 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 20:17:35.274397 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 20:17:35.274453 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 20:17:35.282878 408464 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1217 20:17:35.323242 408464 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1217 20:17:35.323292 408464 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 20:17:35.394184 408464 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1217 20:17:35.394262 408464 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1217 20:17:35.394308 408464 kubeadm.go:319] OS: Linux
I1217 20:17:35.394353 408464 kubeadm.go:319] CGROUPS_CPU: enabled
I1217 20:17:35.394402 408464 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1217 20:17:35.394447 408464 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1217 20:17:35.394504 408464 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1217 20:17:35.394574 408464 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1217 20:17:35.394633 408464 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1217 20:17:35.394683 408464 kubeadm.go:319] CGROUPS_PIDS: enabled
I1217 20:17:35.394733 408464 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1217 20:17:35.394808 408464 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1217 20:17:35.465742 408464 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 20:17:35.465845 408464 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 20:17:35.465939 408464 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 20:17:35.474171 408464 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 20:17:35.480628 408464 out.go:252] - Generating certificates and keys ...
I1217 20:17:35.480717 408464 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 20:17:35.480780 408464 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 20:17:35.734680 408464 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1217 20:17:36.263127 408464 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1217 20:17:36.626466 408464 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1217 20:17:37.356805 408464 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1217 20:17:37.545588 408464 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1217 20:17:37.545927 408464 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-682596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1217 20:17:37.745652 408464 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1217 20:17:37.745806 408464 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-682596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1217 20:17:37.888743 408464 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1217 20:17:37.931874 408464 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1217 20:17:38.027072 408464 kubeadm.go:319] [certs] Generating "sa" key and public key
I1217 20:17:38.027399 408464 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 20:17:38.192366 408464 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 20:17:38.903163 408464 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 20:17:39.404352 408464 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 20:17:39.670662 408464 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 20:17:40.147373 408464 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 20:17:40.148028 408464 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 20:17:40.151613 408464 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 20:17:40.155025 408464 out.go:252] - Booting up control plane ...
I1217 20:17:40.155150 408464 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 20:17:40.155238 408464 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 20:17:40.156028 408464 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 20:17:40.186779 408464 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 20:17:40.186880 408464 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 20:17:40.195235 408464 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 20:17:40.195328 408464 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 20:17:40.195371 408464 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 20:17:40.340340 408464 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 20:17:40.340452 408464 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 20:21:40.340538 408464 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000237436s
I1217 20:21:40.340558 408464 kubeadm.go:319]
I1217 20:21:40.340614 408464 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1217 20:21:40.340646 408464 kubeadm.go:319] - The kubelet is not running
I1217 20:21:40.340750 408464 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1217 20:21:40.340753 408464 kubeadm.go:319]
I1217 20:21:40.340856 408464 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1217 20:21:40.340887 408464 kubeadm.go:319] - 'systemctl status kubelet'
I1217 20:21:40.340917 408464 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1217 20:21:40.340920 408464 kubeadm.go:319]
I1217 20:21:40.346040 408464 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1217 20:21:40.347665 408464 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1217 20:21:40.347809 408464 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 20:21:40.348050 408464 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1217 20:21:40.348054 408464 kubeadm.go:319]
I1217 20:21:40.348161 408464 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
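Note: the health check kubeadm is polling above is the kubelet's own /healthz endpoint on port 10248, and it can be probed by hand from inside the node. A minimal sketch, assuming the profile name from this run and shell access via minikube ssh (standard commands, not part of the original output):

    # open a shell inside the minikube node container for this profile
    minikube -p functional-682596 ssh
    # probe the same endpoint kubeadm's kubelet-check polls
    curl -sSL http://127.0.0.1:10248/healthz
    # if the probe fails, follow kubeadm's own advice and inspect the unit
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50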
W1217 20:21:40.348291 408464 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-682596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-682596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000237436s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
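Note: the cgroups v1 warning in the stderr above names the opt-in it wants: the kubelet configuration option 'FailCgroupV1' must be set to 'false' on kubelet v1.35 or newer. A sketch of applying that inside the node, assuming the YAML field is the lowerCamelCase form of the option named in the warning (verify the spelling against your kubelet version):

    # append the opt-in to the kubelet config kubeadm wrote (path taken from the log above)
    sudo tee -a /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet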
I1217 20:21:40.348389 408464 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1217 20:21:40.763345 408464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 20:21:40.776730 408464 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1217 20:21:40.776787 408464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 20:21:40.784760 408464 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1217 20:21:40.784768 408464 kubeadm.go:158] found existing configuration files:
I1217 20:21:40.784817 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 20:21:40.792613 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1217 20:21:40.792673 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1217 20:21:40.799938 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 20:21:40.808051 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1217 20:21:40.808107 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 20:21:40.815869 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 20:21:40.823548 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1217 20:21:40.823616 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 20:21:40.831013 408464 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 20:21:40.838927 408464 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1217 20:21:40.838983 408464 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 20:21:40.846562 408464 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1217 20:21:40.884726 408464 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1217 20:21:40.885008 408464 kubeadm.go:319] [preflight] Running pre-flight checks
I1217 20:21:40.961521 408464 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1217 20:21:40.961585 408464 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1217 20:21:40.961620 408464 kubeadm.go:319] OS: Linux
I1217 20:21:40.961666 408464 kubeadm.go:319] CGROUPS_CPU: enabled
I1217 20:21:40.961714 408464 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1217 20:21:40.961760 408464 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1217 20:21:40.961807 408464 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1217 20:21:40.961854 408464 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1217 20:21:40.961901 408464 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1217 20:21:40.961945 408464 kubeadm.go:319] CGROUPS_PIDS: enabled
I1217 20:21:40.961992 408464 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1217 20:21:40.962037 408464 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1217 20:21:41.033198 408464 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1217 20:21:41.033325 408464 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1217 20:21:41.033432 408464 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1217 20:21:41.040705 408464 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1217 20:21:41.046212 408464 out.go:252] - Generating certificates and keys ...
I1217 20:21:41.046320 408464 kubeadm.go:319] [certs] Using existing ca certificate authority
I1217 20:21:41.046404 408464 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1217 20:21:41.046501 408464 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1217 20:21:41.046571 408464 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1217 20:21:41.046647 408464 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1217 20:21:41.046704 408464 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1217 20:21:41.046770 408464 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1217 20:21:41.046834 408464 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1217 20:21:41.046913 408464 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1217 20:21:41.046989 408464 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1217 20:21:41.047028 408464 kubeadm.go:319] [certs] Using the existing "sa" key
I1217 20:21:41.047091 408464 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1217 20:21:41.186421 408464 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1217 20:21:41.352321 408464 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1217 20:21:41.659899 408464 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1217 20:21:41.780571 408464 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1217 20:21:41.907324 408464 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1217 20:21:41.907929 408464 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1217 20:21:41.911536 408464 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1217 20:21:41.914828 408464 out.go:252] - Booting up control plane ...
I1217 20:21:41.914926 408464 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1217 20:21:41.915003 408464 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1217 20:21:41.915715 408464 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1217 20:21:41.936212 408464 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1217 20:21:41.936423 408464 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1217 20:21:41.945262 408464 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1217 20:21:41.945549 408464 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1217 20:21:41.945738 408464 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1217 20:21:42.092879 408464 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1217 20:21:42.093114 408464 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1217 20:25:42.088415 408464 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000252088s
I1217 20:25:42.088438 408464 kubeadm.go:319]
I1217 20:25:42.088494 408464 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1217 20:25:42.088527 408464 kubeadm.go:319] - The kubelet is not running
I1217 20:25:42.088631 408464 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1217 20:25:42.088635 408464 kubeadm.go:319]
I1217 20:25:42.088738 408464 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1217 20:25:42.088770 408464 kubeadm.go:319] - 'systemctl status kubelet'
I1217 20:25:42.088800 408464 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1217 20:25:42.088803 408464 kubeadm.go:319]
I1217 20:25:42.094846 408464 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1217 20:25:42.095369 408464 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1217 20:25:42.095514 408464 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1217 20:25:42.095757 408464 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1217 20:25:42.095762 408464 kubeadm.go:319]
I1217 20:25:42.095910 408464 kubeadm.go:403] duration metric: took 8m6.933150928s to StartCluster
I1217 20:25:42.095948 408464 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1217 20:25:42.095981 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1217 20:25:42.096094 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1217 20:25:42.125693 408464 cri.go:89] found id: ""
I1217 20:25:42.125709 408464 logs.go:282] 0 containers: []
W1217 20:25:42.125719 408464 logs.go:284] No container was found matching "kube-apiserver"
I1217 20:25:42.125726 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1217 20:25:42.125948 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1217 20:25:42.158251 408464 cri.go:89] found id: ""
I1217 20:25:42.158266 408464 logs.go:282] 0 containers: []
W1217 20:25:42.158274 408464 logs.go:284] No container was found matching "etcd"
I1217 20:25:42.158281 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1217 20:25:42.158354 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1217 20:25:42.188180 408464 cri.go:89] found id: ""
I1217 20:25:42.188196 408464 logs.go:282] 0 containers: []
W1217 20:25:42.188204 408464 logs.go:284] No container was found matching "coredns"
I1217 20:25:42.188210 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1217 20:25:42.188308 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1217 20:25:42.216858 408464 cri.go:89] found id: ""
I1217 20:25:42.216874 408464 logs.go:282] 0 containers: []
W1217 20:25:42.216882 408464 logs.go:284] No container was found matching "kube-scheduler"
I1217 20:25:42.216887 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1217 20:25:42.216958 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1217 20:25:42.253901 408464 cri.go:89] found id: ""
I1217 20:25:42.253916 408464 logs.go:282] 0 containers: []
W1217 20:25:42.253924 408464 logs.go:284] No container was found matching "kube-proxy"
I1217 20:25:42.253930 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1217 20:25:42.254000 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1217 20:25:42.284974 408464 cri.go:89] found id: ""
I1217 20:25:42.284988 408464 logs.go:282] 0 containers: []
W1217 20:25:42.284996 408464 logs.go:284] No container was found matching "kube-controller-manager"
I1217 20:25:42.285001 408464 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1217 20:25:42.285063 408464 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1217 20:25:42.318584 408464 cri.go:89] found id: ""
I1217 20:25:42.318597 408464 logs.go:282] 0 containers: []
W1217 20:25:42.318605 408464 logs.go:284] No container was found matching "kindnet"
I1217 20:25:42.318614 408464 logs.go:123] Gathering logs for dmesg ...
I1217 20:25:42.318651 408464 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1217 20:25:42.334655 408464 logs.go:123] Gathering logs for describe nodes ...
I1217 20:25:42.334672 408464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1217 20:25:42.399188 408464 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1217 20:25:42.390062 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.390716 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.392768 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.393411 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.395142 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1217 20:25:42.390062 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.390716 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.392768 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.393411 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:42.395142 4838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1217 20:25:42.399200 408464 logs.go:123] Gathering logs for containerd ...
I1217 20:25:42.399210 408464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1217 20:25:42.438269 408464 logs.go:123] Gathering logs for container status ...
I1217 20:25:42.438288 408464 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1217 20:25:42.470620 408464 logs.go:123] Gathering logs for kubelet ...
I1217 20:25:42.470637 408464 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1217 20:25:42.527492 408464 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000252088s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 20:25:42.527537 408464 out.go:285] *
W1217 20:25:42.527602 408464 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000252088s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 20:25:42.527679 408464 out.go:285] *
W1217 20:25:42.529802 408464 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1217 20:25:42.535585 408464 out.go:203]
W1217 20:25:42.539273 408464 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000252088s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1217 20:25:42.539324 408464 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1217 20:25:42.539346 408464 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1217 20:25:42.543078 408464 out.go:203]
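Note: the suggestion above can be tried on a retry by adding the flag to the same start invocation. A sketch, keeping the run's other flags unchanged (the flag comes verbatim from the suggestion; whether it helps here is uncertain, since the kubelet journal below points at cgroup v1 validation rather than the cgroup driver):

    out/minikube-linux-arm64 start -p functional-682596 \
      --extra-config=kubelet.cgroup-driver=systemd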
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.126766889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.126785712Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.126836913Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.126853397Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.126874082Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.127012004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.127034593Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.127052102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.127074536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.127112444Z" level=info msg="Connect containerd service"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.127526872Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.128317362Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.139998451Z" level=info msg="Start subscribing containerd event"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.140234146Z" level=info msg="Start recovering state"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.140044138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.140549382Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.181957692Z" level=info msg="Start event monitor"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182010460Z" level=info msg="Start cni network conf syncer for default"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182020421Z" level=info msg="Start streaming server"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182034986Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182044807Z" level=info msg="runtime interface starting up..."
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182050887Z" level=info msg="starting plugins..."
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182063228Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 17 20:17:33 functional-682596 containerd[807]: time="2025-12-17T20:17:33.182352470Z" level=info msg="containerd successfully booted in 0.080223s"
Dec 17 20:17:33 functional-682596 systemd[1]: Started containerd.service - containerd container runtime.
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1217 20:25:43.509425 4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:43.509815 4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:43.511450 4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:43.511951 4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1217 20:25:43.513539 4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec17 17:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015536] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.514164] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.034184] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.806183] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.649674] kauditd_printk_skb: 36 callbacks suppressed
[Dec17 19:37] hrtimer: interrupt took 15014583 ns
[Dec17 19:39] kauditd_printk_skb: 8 callbacks suppressed
[Dec17 20:06] kauditd_printk_skb: 8 callbacks suppressed
[Dec17 20:17] FS-Cache: Duplicate cookie detected
[ +0.000767] FS-Cache: O-cookie c=00000031 [p=00000002 fl=222 nc=0 na=1]
[ +0.001036] FS-Cache: O-cookie d=00000000b1f70094{9P.session} n=000000004124fba5
[ +0.001177] FS-Cache: O-key=[10] '34323937353834383437'
[ +0.000816] FS-Cache: N-cookie c=00000032 [p=00000002 fl=2 nc=0 na=1]
[ +0.001043] FS-Cache: N-cookie d=00000000b1f70094{9P.session} n=000000009cece4cf
[ +0.001160] FS-Cache: N-key=[10] '34323937353834383437'
==> kernel <==
20:25:43 up 3:08, 0 user, load average: 0.38, 0.54, 1.04
Linux functional-682596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 17 20:25:40 functional-682596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 20:25:40 functional-682596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 17 20:25:40 functional-682596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:40 functional-682596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:40 functional-682596 kubelet[4763]: E1217 20:25:40.781915 4763 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 20:25:40 functional-682596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 20:25:40 functional-682596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 20:25:41 functional-682596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 17 20:25:41 functional-682596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:41 functional-682596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:41 functional-682596 kubelet[4768]: E1217 20:25:41.536868 4768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 20:25:41 functional-682596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 20:25:41 functional-682596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 20:25:42 functional-682596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 17 20:25:42 functional-682596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:42 functional-682596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:42 functional-682596 kubelet[4809]: E1217 20:25:42.311973 4809 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 20:25:42 functional-682596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 20:25:42 functional-682596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 17 20:25:42 functional-682596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 17 20:25:42 functional-682596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:42 functional-682596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 17 20:25:43 functional-682596 kubelet[4872]: E1217 20:25:43.058057 4872 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 17 20:25:43 functional-682596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 17 20:25:43 functional-682596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
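Note: the kubelet journal above shows the actual blocker: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host, which matches the SystemVerification warning earlier in the run. A quick way to confirm which cgroup version the host mounts (a standard coreutils check, not from the original log):

    # prints cgroup2fs on a cgroup v2 host, tmpfs on a cgroup v1 host
    stat -fc %T /sys/fs/cgroup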
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-682596 -n functional-682596
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-682596 -n functional-682596: exit status 6 (348.063149ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1217 20:25:43.981842 414214 status.go:458] kubeconfig endpoint: get endpoint: "functional-682596" does not appear in /home/jenkins/minikube-integration/21808-367595/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-682596" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (502.31s)