=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1218 00:21:09.080983 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:23:25.214406 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:23:52.929883 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.397238 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.403636 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.415115 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.436649 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.478107 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.559646 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:04.721203 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:05.043000 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:05.685160 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:06.966580 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:09.529481 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:14.650926 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:24.892588 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:25:45.374812 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:26:26.336206 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:27:48.260338 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-739047/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1218 00:28:25.214532 1261148 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/addons-006416/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m16.50082275s)
-- stdout --
* [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22186
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-232602" primary control-plane node in "functional-232602" cluster
* Pulling base image v0.0.48-1765966054-22186 ...
* Found network options:
- HTTP_PROXY=localhost:42501
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:42501 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229243s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000195767s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000195767s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
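The stderr warnings above name the likely remediations themselves: NO_PROXY does not cover the minikube IP (192.168.49.2), and the kubelet never became healthy on this cgroup v1 host. A minimal retry sketch combining only the hints printed in this log (the --extra-config spelling is taken from the "Suggestion:" line; the combination is untested here):

    # cover the minikube IP per the NO_PROXY warning
    export NO_PROXY=localhost,127.0.0.1,192.168.49.2
    # retry with the cgroup driver the log suggests
    out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 --extra-config=kubelet.cgroup-driver=systemd

The cgroups v1 preflight warning also names a kubelet escape hatch; as a KubeletConfiguration fragment that option would read (camelCase spelling assumed from the upstream kubelet config API):

    failCgroupV1: false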
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-232602
helpers_test.go:244: (dbg) docker inspect functional-232602:
-- stdout --
[
{
"Id": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
"Created": "2025-12-18T00:20:52.193636538Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1300116,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-18T00:20:52.255390589Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1411dfa4fea1291ce69fcd55acb99f3fbff3e701cee30fdd4f0b2561ac0ef6b0",
"ResolvConfPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hostname",
"HostsPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/hosts",
"LogPath": "/var/lib/docker/containers/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d/99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d-json.log",
"Name": "/functional-232602",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-232602:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-232602",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "99b81787dd55ed4964998c5ac0b7ce375f45cc79dae13c82eb09d595c609439d",
"LowerDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29-init/diff:/var/lib/docker/overlay2/3b9a10f83a9a877b53e524808c1a4110693d1db18cdd0adad88884059d01c267/diff",
"MergedDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/merged",
"UpperDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/diff",
"WorkDir": "/var/lib/docker/overlay2/3b747aad1e32a8ce93b0cffdee880e12cc148ecb38623dc0950859e672e17c29/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-232602",
"Source": "/var/lib/docker/volumes/functional-232602/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-232602",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-232602",
"name.minikube.sigs.k8s.io": "functional-232602",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e580e3c37349f21b860477947e2e65b29ee2e4d0bd26375f18420643185d4bf3",
"SandboxKey": "/var/run/docker/netns/e580e3c37349",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33902"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33903"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33906"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33904"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33905"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-232602": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "16:b2:23:bb:20:4f",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "e977d303b3e8a265564d77d86237a37bcecbdaf8825c11bf07631499bf522bb4",
"EndpointID": "437f32623e939d835264b6530be25d611a53c251fff2c87b5d4e8cb7b76c91ca",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-232602",
"99b81787dd55"
]
}
}
}
}
]
-- /stdout --
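The post-mortem only consults a few fields from the dump above (the container IP and the published host ports); docker's Go-template flag can query those directly, e.g. (a sketch using standard docker format strings):

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-232602
    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-232602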
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-232602 -n functional-232602: exit status 6 (326.24102ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1218 00:29:04.044967 1305188 status.go:458] kubeconfig endpoint: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
** /stderr **
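The status stderr shows the profile's endpoint missing from the kubeconfig, matching the stale-context warning in stdout; the repair the warning itself suggests would be (sketch, assuming the profile directory is intact):

    out/minikube-linux-arm64 -p functional-232602 update-context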
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-232602 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-739047 ssh findmnt -T /mount-9p | grep 9p │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ mount │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdspecific-port1226370769/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ ssh │ functional-739047 ssh findmnt -T /mount-9p | grep 9p │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ ssh │ functional-739047 ssh -- ls -la /mount-9p │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ ssh │ functional-739047 ssh sudo umount -f /mount-9p │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ mount │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount3 --alsologtostderr -v=1 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ mount │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount2 --alsologtostderr -v=1 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ ssh │ functional-739047 ssh findmnt -T /mount1 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ mount │ -p functional-739047 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2209556534/001:/mount1 --alsologtostderr -v=1 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ ssh │ functional-739047 ssh findmnt -T /mount1 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ ssh │ functional-739047 ssh findmnt -T /mount2 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ ssh │ functional-739047 ssh findmnt -T /mount3 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ mount │ -p functional-739047 --kill=true │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ update-context │ functional-739047 update-context --alsologtostderr -v=2 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ update-context │ functional-739047 update-context --alsologtostderr -v=2 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ update-context │ functional-739047 update-context --alsologtostderr -v=2 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ image │ functional-739047 image ls --format short --alsologtostderr │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ image │ functional-739047 image ls --format yaml --alsologtostderr │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ ssh │ functional-739047 ssh pgrep buildkitd │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
│ image │ functional-739047 image ls --format json --alsologtostderr │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ image │ functional-739047 image ls --format table --alsologtostderr │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ image │ functional-739047 image build -t localhost/my-image:functional-739047 testdata/build --alsologtostderr │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ image │ functional-739047 image ls │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ delete │ -p functional-739047 │ functional-739047 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ 18 Dec 25 00:20 UTC │
│ start │ -p functional-232602 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-232602 │ jenkins │ v1.37.0 │ 18 Dec 25 00:20 UTC │ │
└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/18 00:20:47
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1218 00:20:47.258293 1299733 out.go:360] Setting OutFile to fd 1 ...
I1218 00:20:47.258419 1299733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:47.258423 1299733 out.go:374] Setting ErrFile to fd 2...
I1218 00:20:47.258427 1299733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1218 00:20:47.258666 1299733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-1259289/.minikube/bin
I1218 00:20:47.259075 1299733 out.go:368] Setting JSON to false
I1218 00:20:47.259901 1299733 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":25394,"bootTime":1765991854,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I1218 00:20:47.259958 1299733 start.go:143] virtualization:
I1218 00:20:47.264515 1299733 out.go:179] * [functional-232602] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1218 00:20:47.269243 1299733 out.go:179] - MINIKUBE_LOCATION=22186
I1218 00:20:47.269373 1299733 notify.go:221] Checking for updates...
I1218 00:20:47.275932 1299733 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1218 00:20:47.279143 1299733 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22186-1259289/kubeconfig
I1218 00:20:47.282303 1299733 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-1259289/.minikube
I1218 00:20:47.285454 1299733 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1218 00:20:47.288599 1299733 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1218 00:20:47.291902 1299733 driver.go:422] Setting default libvirt URI to qemu:///system
I1218 00:20:47.316802 1299733 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1218 00:20:47.316909 1299733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1218 00:20:47.378706 1299733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-18 00:20:47.369615373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1218 00:20:47.378798 1299733 docker.go:319] overlay module found
I1218 00:20:47.382049 1299733 out.go:179] * Using the docker driver based on user configuration
I1218 00:20:47.385057 1299733 start.go:309] selected driver: docker
I1218 00:20:47.385065 1299733 start.go:927] validating driver "docker" against <nil>
I1218 00:20:47.385076 1299733 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1218 00:20:47.385829 1299733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1218 00:20:47.441404 1299733 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-18 00:20:47.431787837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1218 00:20:47.441557 1299733 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1218 00:20:47.441775 1299733 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1218 00:20:47.444924 1299733 out.go:179] * Using Docker driver with root privileges
I1218 00:20:47.447880 1299733 cni.go:84] Creating CNI manager for ""
I1218 00:20:47.447934 1299733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1218 00:20:47.447941 1299733 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1218 00:20:47.448009 1299733 start.go:353] cluster config:
{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1218 00:20:47.452982 1299733 out.go:179] * Starting "functional-232602" primary control-plane node in "functional-232602" cluster
I1218 00:20:47.455922 1299733 cache.go:134] Beginning downloading kic base image for docker with containerd
I1218 00:20:47.458854 1299733 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
I1218 00:20:47.461757 1299733 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1218 00:20:47.461797 1299733 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
I1218 00:20:47.461817 1299733 cache.go:65] Caching tarball of preloaded images
I1218 00:20:47.461848 1299733 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
I1218 00:20:47.461906 1299733 preload.go:238] Found /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1218 00:20:47.461915 1299733 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
I1218 00:20:47.462244 1299733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
I1218 00:20:47.462262 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json: {Name:mk0e5327bdfc651586437cd1e3d43df2deb645ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:47.482082 1299733 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
I1218 00:20:47.482093 1299733 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
I1218 00:20:47.482112 1299733 cache.go:243] Successfully downloaded all kic artifacts
I1218 00:20:47.482141 1299733 start.go:360] acquireMachinesLock for functional-232602: {Name:mk4ce6af07590f49ce59e55805078a5994d40eaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1218 00:20:47.482276 1299733 start.go:364] duration metric: took 120.424µs to acquireMachinesLock for "functional-232602"
I1218 00:20:47.482301 1299733 start.go:93] Provisioning new machine with config: &{Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1218 00:20:47.482363 1299733 start.go:125] createHost starting for "" (driver="docker")
I1218 00:20:47.485796 1299733 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1218 00:20:47.486093 1299733 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:42501 to docker env.
I1218 00:20:47.486117 1299733 start.go:159] libmachine.API.Create for "functional-232602" (driver="docker")
I1218 00:20:47.486139 1299733 client.go:173] LocalClient.Create starting
I1218 00:20:47.486211 1299733 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem
I1218 00:20:47.486244 1299733 main.go:143] libmachine: Decoding PEM data...
I1218 00:20:47.486261 1299733 main.go:143] libmachine: Parsing certificate...
I1218 00:20:47.486320 1299733 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem
I1218 00:20:47.486337 1299733 main.go:143] libmachine: Decoding PEM data...
I1218 00:20:47.486347 1299733 main.go:143] libmachine: Parsing certificate...
I1218 00:20:47.486702 1299733 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1218 00:20:47.503453 1299733 cli_runner.go:211] docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1218 00:20:47.503546 1299733 network_create.go:284] running [docker network inspect functional-232602] to gather additional debugging logs...
I1218 00:20:47.503570 1299733 cli_runner.go:164] Run: docker network inspect functional-232602
W1218 00:20:47.520235 1299733 cli_runner.go:211] docker network inspect functional-232602 returned with exit code 1
I1218 00:20:47.520255 1299733 network_create.go:287] error running [docker network inspect functional-232602]: docker network inspect functional-232602: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-232602 not found
I1218 00:20:47.520267 1299733 network_create.go:289] output of [docker network inspect functional-232602]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-232602 not found
** /stderr **
I1218 00:20:47.520355 1299733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1218 00:20:47.537086 1299733 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018fbb90}
I1218 00:20:47.537127 1299733 network_create.go:124] attempt to create docker network functional-232602 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1218 00:20:47.537184 1299733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-232602 functional-232602
I1218 00:20:47.595672 1299733 network_create.go:108] docker network functional-232602 192.168.49.0/24 created
I1218 00:20:47.595700 1299733 kic.go:121] calculated static IP "192.168.49.2" for the "functional-232602" container
I1218 00:20:47.595777 1299733 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1218 00:20:47.612034 1299733 cli_runner.go:164] Run: docker volume create functional-232602 --label name.minikube.sigs.k8s.io=functional-232602 --label created_by.minikube.sigs.k8s.io=true
I1218 00:20:47.630758 1299733 oci.go:103] Successfully created a docker volume functional-232602
I1218 00:20:47.630835 1299733 cli_runner.go:164] Run: docker run --rm --name functional-232602-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232602 --entrypoint /usr/bin/test -v functional-232602:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
I1218 00:20:48.197202 1299733 oci.go:107] Successfully prepared a docker volume functional-232602
I1218 00:20:48.197264 1299733 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1218 00:20:48.197272 1299733 kic.go:194] Starting extracting preloaded images to volume ...
I1218 00:20:48.197358 1299733 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232602:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
I1218 00:20:52.120424 1299733 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-1259289/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-232602:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.923032655s)
I1218 00:20:52.120446 1299733 kic.go:203] duration metric: took 3.923171385s to extract preloaded images to volume ...
W1218 00:20:52.120588 1299733 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1218 00:20:52.120721 1299733 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1218 00:20:52.178767 1299733 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-232602 --name functional-232602 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-232602 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-232602 --network functional-232602 --ip 192.168.49.2 --volume functional-232602:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
I1218 00:20:52.465063 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Running}}
I1218 00:20:52.489888 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:20:52.508863 1299733 cli_runner.go:164] Run: docker exec functional-232602 stat /var/lib/dpkg/alternatives/iptables
I1218 00:20:52.560963 1299733 oci.go:144] the created container "functional-232602" has a running status.
I1218 00:20:52.560982 1299733 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa...
I1218 00:20:53.371070 1299733 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1218 00:20:53.397615 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:20:53.417874 1299733 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1218 00:20:53.417885 1299733 kic_runner.go:114] Args: [docker exec --privileged functional-232602 chown docker:docker /home/docker/.ssh/authorized_keys]
I1218 00:20:53.464219 1299733 cli_runner.go:164] Run: docker container inspect functional-232602 --format={{.State.Status}}
I1218 00:20:53.489619 1299733 machine.go:94] provisionDockerMachine start ...
I1218 00:20:53.489711 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:53.509494 1299733 main.go:143] libmachine: Using SSH client type: native
I1218 00:20:53.509871 1299733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33902 <nil> <nil>}
I1218 00:20:53.509878 1299733 main.go:143] libmachine: About to run SSH command:
hostname
I1218 00:20:53.676099 1299733 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
I1218 00:20:53.676127 1299733 ubuntu.go:182] provisioning hostname "functional-232602"
I1218 00:20:53.676199 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:53.697836 1299733 main.go:143] libmachine: Using SSH client type: native
I1218 00:20:53.698126 1299733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33902 <nil> <nil>}
I1218 00:20:53.698134 1299733 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-232602 && echo "functional-232602" | sudo tee /etc/hostname
I1218 00:20:53.873952 1299733 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-232602
I1218 00:20:53.874032 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:53.890932 1299733 main.go:143] libmachine: Using SSH client type: native
I1218 00:20:53.891244 1299733 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33902 <nil> <nil>}
I1218 00:20:53.891261 1299733 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-232602' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-232602/g' /etc/hosts;
else
echo '127.0.1.1 functional-232602' | sudo tee -a /etc/hosts;
fi
fi
I1218 00:20:54.044973 1299733 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1218 00:20:54.044990 1299733 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-1259289/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-1259289/.minikube}
I1218 00:20:54.045008 1299733 ubuntu.go:190] setting up certificates
I1218 00:20:54.045016 1299733 provision.go:84] configureAuth start
I1218 00:20:54.045076 1299733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
I1218 00:20:54.062459 1299733 provision.go:143] copyHostCerts
I1218 00:20:54.062522 1299733 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem, removing ...
I1218 00:20:54.062530 1299733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem
I1218 00:20:54.062609 1299733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.pem (1082 bytes)
I1218 00:20:54.062707 1299733 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem, removing ...
I1218 00:20:54.062711 1299733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem
I1218 00:20:54.062736 1299733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/cert.pem (1123 bytes)
I1218 00:20:54.062794 1299733 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem, removing ...
I1218 00:20:54.062797 1299733 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem
I1218 00:20:54.062821 1299733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-1259289/.minikube/key.pem (1675 bytes)
I1218 00:20:54.062879 1299733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem org=jenkins.functional-232602 san=[127.0.0.1 192.168.49.2 functional-232602 localhost minikube]
I1218 00:20:54.190472 1299733 provision.go:177] copyRemoteCerts
I1218 00:20:54.190523 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1218 00:20:54.190569 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:54.208147 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:20:54.316446 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1218 00:20:54.335031 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1218 00:20:54.352732 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1218 00:20:54.369946 1299733 provision.go:87] duration metric: took 324.879384ms to configureAuth
I1218 00:20:54.369963 1299733 ubuntu.go:206] setting minikube options for container-runtime
I1218 00:20:54.370172 1299733 config.go:182] Loaded profile config "functional-232602": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1218 00:20:54.370177 1299733 machine.go:97] duration metric: took 880.548208ms to provisionDockerMachine
I1218 00:20:54.370185 1299733 client.go:176] duration metric: took 6.884042324s to LocalClient.Create
I1218 00:20:54.370201 1299733 start.go:167] duration metric: took 6.88408504s to libmachine.API.Create "functional-232602"
I1218 00:20:54.370207 1299733 start.go:293] postStartSetup for "functional-232602" (driver="docker")
I1218 00:20:54.370217 1299733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1218 00:20:54.370310 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1218 00:20:54.370357 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:54.387674 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:20:54.496664 1299733 ssh_runner.go:195] Run: cat /etc/os-release
I1218 00:20:54.499873 1299733 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1218 00:20:54.499891 1299733 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1218 00:20:54.499902 1299733 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/addons for local assets ...
I1218 00:20:54.499958 1299733 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-1259289/.minikube/files for local assets ...
I1218 00:20:54.500044 1299733 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem -> 12611482.pem in /etc/ssl/certs
I1218 00:20:54.500121 1299733 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts -> hosts in /etc/test/nested/copy/1261148
I1218 00:20:54.500178 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1261148
I1218 00:20:54.507585 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /etc/ssl/certs/12611482.pem (1708 bytes)
I1218 00:20:54.525591 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/test/nested/copy/1261148/hosts --> /etc/test/nested/copy/1261148/hosts (40 bytes)
I1218 00:20:54.543747 1299733 start.go:296] duration metric: took 173.52649ms for postStartSetup
I1218 00:20:54.544118 1299733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
I1218 00:20:54.561132 1299733 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/config.json ...
I1218 00:20:54.561409 1299733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1218 00:20:54.561447 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:54.578288 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:20:54.681579 1299733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1218 00:20:54.686248 1299733 start.go:128] duration metric: took 7.203869534s to createHost
I1218 00:20:54.686263 1299733 start.go:83] releasing machines lock for "functional-232602", held for 7.203978964s
I1218 00:20:54.686332 1299733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-232602
I1218 00:20:54.708820 1299733 out.go:179] * Found network options:
I1218 00:20:54.711789 1299733 out.go:179] - HTTP_PROXY=localhost:42501
W1218 00:20:54.714647 1299733 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1218 00:20:54.717612 1299733 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1218 00:20:54.720543 1299733 ssh_runner.go:195] Run: cat /version.json
I1218 00:20:54.720587 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:54.720644 1299733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1218 00:20:54.720703 1299733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-232602
I1218 00:20:54.738768 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:20:54.740697 1299733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/22186-1259289/.minikube/machines/functional-232602/id_rsa Username:docker}
I1218 00:20:54.844212 1299733 ssh_runner.go:195] Run: systemctl --version
I1218 00:20:54.937885 1299733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1218 00:20:54.942554 1299733 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1218 00:20:54.942627 1299733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1218 00:20:54.969774 1299733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1218 00:20:54.969789 1299733 start.go:496] detecting cgroup driver to use...
I1218 00:20:54.969830 1299733 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1218 00:20:54.969882 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1218 00:20:54.984329 1299733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1218 00:20:54.996960 1299733 docker.go:218] disabling cri-docker service (if available) ...
I1218 00:20:54.997013 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1218 00:20:55.041871 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1218 00:20:55.070165 1299733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1218 00:20:55.214048 1299733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1218 00:20:55.338714 1299733 docker.go:234] disabling docker service ...
I1218 00:20:55.338772 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1218 00:20:55.361776 1299733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1218 00:20:55.375748 1299733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1218 00:20:55.492547 1299733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1218 00:20:55.601361 1299733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1218 00:20:55.614096 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1218 00:20:55.627575 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1218 00:20:55.636509 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1218 00:20:55.645793 1299733 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1218 00:20:55.645868 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1218 00:20:55.654853 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1218 00:20:55.663826 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1218 00:20:55.672369 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1218 00:20:55.681023 1299733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1218 00:20:55.689350 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1218 00:20:55.697956 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1218 00:20:55.706607 1299733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1218 00:20:55.716460 1299733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1218 00:20:55.723996 1299733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1218 00:20:55.731555 1299733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1218 00:20:55.845481 1299733 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1218 00:20:55.980328 1299733 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1218 00:20:55.980396 1299733 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1218 00:20:55.984484 1299733 start.go:564] Will wait 60s for crictl version
I1218 00:20:55.984541 1299733 ssh_runner.go:195] Run: which crictl
I1218 00:20:55.988184 1299733 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1218 00:20:56.014215 1299733 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1218 00:20:56.014281 1299733 ssh_runner.go:195] Run: containerd --version
I1218 00:20:56.038843 1299733 ssh_runner.go:195] Run: containerd --version
I1218 00:20:56.066737 1299733 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
I1218 00:20:56.069804 1299733 cli_runner.go:164] Run: docker network inspect functional-232602 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1218 00:20:56.086697 1299733 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1218 00:20:56.090661 1299733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1218 00:20:56.100562 1299733 kubeadm.go:884] updating cluster {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1218 00:20:56.100695 1299733 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1218 00:20:56.100759 1299733 ssh_runner.go:195] Run: sudo crictl images --output json
I1218 00:20:56.125716 1299733 containerd.go:627] all images are preloaded for containerd runtime.
I1218 00:20:56.125729 1299733 containerd.go:534] Images already preloaded, skipping extraction
I1218 00:20:56.125794 1299733 ssh_runner.go:195] Run: sudo crictl images --output json
I1218 00:20:56.153103 1299733 containerd.go:627] all images are preloaded for containerd runtime.
I1218 00:20:56.153116 1299733 cache_images.go:86] Images are preloaded, skipping loading
I1218 00:20:56.153123 1299733 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
I1218 00:20:56.153214 1299733 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-232602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1218 00:20:56.153283 1299733 ssh_runner.go:195] Run: sudo crictl info
I1218 00:20:56.176942 1299733 cni.go:84] Creating CNI manager for ""
I1218 00:20:56.176953 1299733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1218 00:20:56.176968 1299733 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1218 00:20:56.176989 1299733 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-232602 NodeName:functional-232602 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1218 00:20:56.177097 1299733 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-232602"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-rc.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1218 00:20:56.177163 1299733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
I1218 00:20:56.185490 1299733 binaries.go:51] Found k8s binaries, skipping transfer
I1218 00:20:56.185556 1299733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1218 00:20:56.194059 1299733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
I1218 00:20:56.207490 1299733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I1218 00:20:56.220180 1299733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
I1218 00:20:56.233327 1299733 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1218 00:20:56.237102 1299733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1218 00:20:56.247077 1299733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1218 00:20:56.354727 1299733 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1218 00:20:56.370734 1299733 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602 for IP: 192.168.49.2
I1218 00:20:56.370745 1299733 certs.go:195] generating shared ca certs ...
I1218 00:20:56.370759 1299733 certs.go:227] acquiring lock for ca certs: {Name:mkd65d7b93cb07d64af3e4c69ea56b7638b14699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.370899 1299733 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key
I1218 00:20:56.370941 1299733 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key
I1218 00:20:56.370947 1299733 certs.go:257] generating profile certs ...
I1218 00:20:56.371009 1299733 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key
I1218 00:20:56.371018 1299733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt with IP's: []
I1218 00:20:56.572670 1299733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt ...
I1218 00:20:56.572688 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.crt: {Name:mk002b9fd89396a08ba8aeecbad98a7698da5b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.572891 1299733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key ...
I1218 00:20:56.572897 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/client.key: {Name:mkc8c019d2d5154bb1375f4761c3e2dfc2d15280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.572994 1299733 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8
I1218 00:20:56.573006 1299733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1218 00:20:56.803308 1299733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8 ...
I1218 00:20:56.803324 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8: {Name:mk03c8ba32fdbe7b0ee88e382a35aa5c6df473b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.803512 1299733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8 ...
I1218 00:20:56.803520 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8: {Name:mkd1d489170e43b3ee96768b44fda4a7baa0a1e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.803604 1299733 certs.go:382] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt.37b948f8 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt
I1218 00:20:56.803685 1299733 certs.go:386] copying /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key.37b948f8 -> /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key
I1218 00:20:56.803738 1299733 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key
I1218 00:20:56.803750 1299733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt with IP's: []
I1218 00:20:56.972062 1299733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt ...
I1218 00:20:56.972078 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt: {Name:mkc5a73242ecc19a02344f7df5b3bfc837658efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.972263 1299733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key ...
I1218 00:20:56.972280 1299733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key: {Name:mk22ccb603e4f887e66caba9e8f646be1037eda3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 00:20:56.972476 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem (1338 bytes)
W1218 00:20:56.972517 1299733 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148_empty.pem, impossibly tiny 0 bytes
I1218 00:20:56.972525 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca-key.pem (1675 bytes)
I1218 00:20:56.972549 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/ca.pem (1082 bytes)
I1218 00:20:56.972571 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/cert.pem (1123 bytes)
I1218 00:20:56.972594 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/key.pem (1675 bytes)
I1218 00:20:56.972660 1299733 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem (1708 bytes)
I1218 00:20:56.973219 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1218 00:20:56.992512 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1218 00:20:57.014831 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1218 00:20:57.033965 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1218 00:20:57.051553 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1218 00:20:57.069634 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1218 00:20:57.086917 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1218 00:20:57.104759 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/profiles/functional-232602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1218 00:20:57.123062 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/certs/1261148.pem --> /usr/share/ca-certificates/1261148.pem (1338 bytes)
I1218 00:20:57.141354 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/files/etc/ssl/certs/12611482.pem --> /usr/share/ca-certificates/12611482.pem (1708 bytes)
I1218 00:20:57.163155 1299733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-1259289/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1218 00:20:57.182372 1299733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1218 00:20:57.196347 1299733 ssh_runner.go:195] Run: openssl version
I1218 00:20:57.203670 1299733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1261148.pem
I1218 00:20:57.211722 1299733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1261148.pem /etc/ssl/certs/1261148.pem
I1218 00:20:57.219548 1299733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261148.pem
I1218 00:20:57.223312 1299733 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 18 00:20 /usr/share/ca-certificates/1261148.pem
I1218 00:20:57.223367 1299733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261148.pem
I1218 00:20:57.264981 1299733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1218 00:20:57.272675 1299733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1261148.pem /etc/ssl/certs/51391683.0
I1218 00:20:57.280571 1299733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/12611482.pem
I1218 00:20:57.288189 1299733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/12611482.pem /etc/ssl/certs/12611482.pem
I1218 00:20:57.296038 1299733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12611482.pem
I1218 00:20:57.299781 1299733 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 18 00:20 /usr/share/ca-certificates/12611482.pem
I1218 00:20:57.299852 1299733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12611482.pem
I1218 00:20:57.341885 1299733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1218 00:20:57.349662 1299733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/12611482.pem /etc/ssl/certs/3ec20f2e.0
I1218 00:20:57.357223 1299733 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1218 00:20:57.364781 1299733 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1218 00:20:57.372134 1299733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1218 00:20:57.375869 1299733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 18 00:11 /usr/share/ca-certificates/minikubeCA.pem
I1218 00:20:57.375927 1299733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1218 00:20:57.418297 1299733 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1218 00:20:57.425777 1299733 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1218 00:20:57.433300 1299733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1218 00:20:57.437036 1299733 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1218 00:20:57.437079 1299733 kubeadm.go:401] StartCluster: {Name:functional-232602 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-232602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1218 00:20:57.437145 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1218 00:20:57.437218 1299733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1218 00:20:57.468987 1299733 cri.go:89] found id: ""
I1218 00:20:57.469046 1299733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1218 00:20:57.476736 1299733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1218 00:20:57.484358 1299733 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1218 00:20:57.484412 1299733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1218 00:20:57.492092 1299733 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1218 00:20:57.492101 1299733 kubeadm.go:158] found existing configuration files:
I1218 00:20:57.492168 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1218 00:20:57.500112 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1218 00:20:57.500171 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1218 00:20:57.507704 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1218 00:20:57.515390 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1218 00:20:57.515445 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1218 00:20:57.523065 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1218 00:20:57.530871 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1218 00:20:57.530940 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1218 00:20:57.538595 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1218 00:20:57.546266 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1218 00:20:57.546329 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1218 00:20:57.553605 1299733 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1218 00:20:57.590567 1299733 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1218 00:20:57.590616 1299733 kubeadm.go:319] [preflight] Running pre-flight checks
I1218 00:20:57.668789 1299733 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1218 00:20:57.668854 1299733 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1218 00:20:57.668887 1299733 kubeadm.go:319] OS: Linux
I1218 00:20:57.668931 1299733 kubeadm.go:319] CGROUPS_CPU: enabled
I1218 00:20:57.668978 1299733 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1218 00:20:57.669024 1299733 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1218 00:20:57.669071 1299733 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1218 00:20:57.669117 1299733 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1218 00:20:57.669170 1299733 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1218 00:20:57.669214 1299733 kubeadm.go:319] CGROUPS_PIDS: enabled
I1218 00:20:57.669260 1299733 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1218 00:20:57.669305 1299733 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1218 00:20:57.734464 1299733 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1218 00:20:57.734568 1299733 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1218 00:20:57.734657 1299733 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1218 00:20:57.741180 1299733 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1218 00:20:57.746689 1299733 out.go:252] - Generating certificates and keys ...
I1218 00:20:57.746815 1299733 kubeadm.go:319] [certs] Using existing ca certificate authority
I1218 00:20:57.746889 1299733 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1218 00:20:57.905686 1299733 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1218 00:20:58.168505 1299733 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1218 00:20:58.293672 1299733 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1218 00:20:58.460852 1299733 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1218 00:20:59.137900 1299733 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1218 00:20:59.138199 1299733 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1218 00:20:59.499789 1299733 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1218 00:20:59.500152 1299733 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1218 00:20:59.613582 1299733 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1218 00:20:59.719414 1299733 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1218 00:20:59.785473 1299733 kubeadm.go:319] [certs] Generating "sa" key and public key
I1218 00:20:59.785696 1299733 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1218 00:20:59.991205 1299733 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1218 00:21:00.141545 1299733 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1218 00:21:00.199456 1299733 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1218 00:21:00.547066 1299733 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1218 00:21:00.852695 1299733 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1218 00:21:00.853401 1299733 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1218 00:21:00.858094 1299733 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1218 00:21:00.861857 1299733 out.go:252] - Booting up control plane ...
I1218 00:21:00.861971 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1218 00:21:00.862072 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1218 00:21:00.862713 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1218 00:21:00.890917 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1218 00:21:00.891042 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1218 00:21:00.898881 1299733 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1218 00:21:00.899209 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1218 00:21:00.899267 1299733 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1218 00:21:01.045206 1299733 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1218 00:21:01.045327 1299733 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1218 00:25:01.043114 1299733 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000229243s
I1218 00:25:01.043134 1299733 kubeadm.go:319]
I1218 00:25:01.043539 1299733 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1218 00:25:01.043613 1299733 kubeadm.go:319] - The kubelet is not running
I1218 00:25:01.043800 1299733 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1218 00:25:01.043808 1299733 kubeadm.go:319]
I1218 00:25:01.044153 1299733 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1218 00:25:01.044209 1299733 kubeadm.go:319] - 'systemctl status kubelet'
I1218 00:25:01.044262 1299733 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1218 00:25:01.044266 1299733 kubeadm.go:319]
I1218 00:25:01.049563 1299733 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1218 00:25:01.050002 1299733 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1218 00:25:01.050114 1299733 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1218 00:25:01.050379 1299733 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1218 00:25:01.050385 1299733 kubeadm.go:319]
I1218 00:25:01.050451 1299733 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1218 00:25:01.050583 1299733 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-232602 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229243s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
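The SystemVerification warning above names the escape hatch for this failure mode: keeping cgroup v1 usable on kubelet v1.35+ requires setting the 'FailCgroupV1' option to 'false' in the kubelet configuration. A sketch of the corresponding KubeletConfiguration fragment, written the way minikube could patch it in (the file name is hypothetical; the field casing is assumed to be the YAML form failCgroupV1 of the option named in the warning):

  $ cat <<'EOF' > failcgroupv1-patch.yaml   # hypothetical patch file; minikube applies
  apiVersion: kubelet.config.k8s.io/v1beta1  # kubelet config via strategic-merge patches,
  kind: KubeletConfiguration                 # as the "[patches] Applied patch" lines show
  failCgroupV1: false
  EOF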
I1218 00:25:01.050666 1299733 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1218 00:25:01.460466 1299733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1218 00:25:01.474204 1299733 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1218 00:25:01.474262 1299733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1218 00:25:01.482019 1299733 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1218 00:25:01.482028 1299733 kubeadm.go:158] found existing configuration files:
I1218 00:25:01.482090 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1218 00:25:01.489976 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1218 00:25:01.490052 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1218 00:25:01.497758 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1218 00:25:01.505630 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1218 00:25:01.505687 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1218 00:25:01.513023 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1218 00:25:01.521855 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1218 00:25:01.521918 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1218 00:25:01.529855 1299733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1218 00:25:01.537737 1299733 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1218 00:25:01.537801 1299733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
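The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A rough shell equivalent of that loop, using the endpoint and paths from this log:

  $ for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done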
I1218 00:25:01.545555 1299733 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1218 00:25:01.585938 1299733 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1218 00:25:01.585993 1299733 kubeadm.go:319] [preflight] Running pre-flight checks
I1218 00:25:01.663035 1299733 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1218 00:25:01.663106 1299733 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1218 00:25:01.663140 1299733 kubeadm.go:319] OS: Linux
I1218 00:25:01.663185 1299733 kubeadm.go:319] CGROUPS_CPU: enabled
I1218 00:25:01.663231 1299733 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1218 00:25:01.663277 1299733 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1218 00:25:01.663324 1299733 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1218 00:25:01.663371 1299733 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1218 00:25:01.663422 1299733 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1218 00:25:01.663466 1299733 kubeadm.go:319] CGROUPS_PIDS: enabled
I1218 00:25:01.663513 1299733 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1218 00:25:01.663558 1299733 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1218 00:25:01.731113 1299733 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1218 00:25:01.731244 1299733 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1218 00:25:01.731342 1299733 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1218 00:25:01.741084 1299733 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1218 00:25:01.746702 1299733 out.go:252] - Generating certificates and keys ...
I1218 00:25:01.746801 1299733 kubeadm.go:319] [certs] Using existing ca certificate authority
I1218 00:25:01.746871 1299733 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1218 00:25:01.746952 1299733 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1218 00:25:01.747017 1299733 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1218 00:25:01.747098 1299733 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1218 00:25:01.747157 1299733 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1218 00:25:01.747224 1299733 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1218 00:25:01.747291 1299733 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1218 00:25:01.747370 1299733 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1218 00:25:01.747446 1299733 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1218 00:25:01.747483 1299733 kubeadm.go:319] [certs] Using the existing "sa" key
I1218 00:25:01.747544 1299733 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1218 00:25:01.988776 1299733 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1218 00:25:02.326144 1299733 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1218 00:25:02.628849 1299733 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1218 00:25:02.775195 1299733 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1218 00:25:03.087870 1299733 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1218 00:25:03.088661 1299733 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1218 00:25:03.091595 1299733 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1218 00:25:03.094841 1299733 out.go:252] - Booting up control plane ...
I1218 00:25:03.094942 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1218 00:25:03.095019 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1218 00:25:03.096349 1299733 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1218 00:25:03.117972 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1218 00:25:03.118083 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1218 00:25:03.126667 1299733 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1218 00:25:03.127045 1299733 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1218 00:25:03.127267 1299733 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1218 00:25:03.267256 1299733 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1218 00:25:03.267371 1299733 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1218 00:29:03.266822 1299733 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000195767s
I1218 00:29:03.266840 1299733 kubeadm.go:319]
I1218 00:29:03.267205 1299733 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1218 00:29:03.267271 1299733 kubeadm.go:319] - The kubelet is not running
I1218 00:29:03.267461 1299733 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1218 00:29:03.267466 1299733 kubeadm.go:319]
I1218 00:29:03.267879 1299733 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1218 00:29:03.268161 1299733 kubeadm.go:319] - 'systemctl status kubelet'
I1218 00:29:03.268217 1299733 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1218 00:29:03.268222 1299733 kubeadm.go:319]
I1218 00:29:03.272904 1299733 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1218 00:29:03.273321 1299733 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1218 00:29:03.273425 1299733 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1218 00:29:03.273714 1299733 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1218 00:29:03.273726 1299733 kubeadm.go:319]
I1218 00:29:03.273804 1299733 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
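Note the difference from the first attempt: the healthz probe now fails with "connection refused" rather than "context deadline exceeded", i.e. nothing is listening on port 10248 at all. A quick way to confirm that from the node, as a sketch:

  $ sudo ss -ltnp 'sport = :10248'   # empty output => kubelet never bound its healthz port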
I1218 00:29:03.273873 1299733 kubeadm.go:403] duration metric: took 8m5.836797344s to StartCluster
I1218 00:29:03.273907 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1218 00:29:03.273969 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1218 00:29:03.302314 1299733 cri.go:89] found id: ""
I1218 00:29:03.302336 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.302344 1299733 logs.go:284] No container was found matching "kube-apiserver"
I1218 00:29:03.302349 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1218 00:29:03.302407 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1218 00:29:03.330654 1299733 cri.go:89] found id: ""
I1218 00:29:03.330668 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.330676 1299733 logs.go:284] No container was found matching "etcd"
I1218 00:29:03.330684 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1218 00:29:03.330748 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1218 00:29:03.357980 1299733 cri.go:89] found id: ""
I1218 00:29:03.357994 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.358001 1299733 logs.go:284] No container was found matching "coredns"
I1218 00:29:03.358006 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1218 00:29:03.358064 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1218 00:29:03.386444 1299733 cri.go:89] found id: ""
I1218 00:29:03.386458 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.386465 1299733 logs.go:284] No container was found matching "kube-scheduler"
I1218 00:29:03.386470 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1218 00:29:03.386531 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1218 00:29:03.416098 1299733 cri.go:89] found id: ""
I1218 00:29:03.416123 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.416130 1299733 logs.go:284] No container was found matching "kube-proxy"
I1218 00:29:03.416135 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1218 00:29:03.416208 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1218 00:29:03.449611 1299733 cri.go:89] found id: ""
I1218 00:29:03.449638 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.449645 1299733 logs.go:284] No container was found matching "kube-controller-manager"
I1218 00:29:03.449651 1299733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1218 00:29:03.449719 1299733 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1218 00:29:03.475818 1299733 cri.go:89] found id: ""
I1218 00:29:03.475842 1299733 logs.go:282] 0 containers: []
W1218 00:29:03.475850 1299733 logs.go:284] No container was found matching "kindnet"
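The crictl sweep above asks the CRI for each expected control-plane container by name and finds none, confirming the static pods were never created. The same sweep can be run by hand (socket path taken from this log):

  $ for n in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      printf '%s: ' "$n"
      sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        ps -a --quiet --name "$n" | wc -l
    done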
I1218 00:29:03.475858 1299733 logs.go:123] Gathering logs for kubelet ...
I1218 00:29:03.475869 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1218 00:29:03.535401 1299733 logs.go:123] Gathering logs for dmesg ...
I1218 00:29:03.535420 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1218 00:29:03.550585 1299733 logs.go:123] Gathering logs for describe nodes ...
I1218 00:29:03.550604 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1218 00:29:03.618511 1299733 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1218 00:29:03.609577 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.610312 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.611858 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.612439 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.614002 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1218 00:29:03.609577 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.610312 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.611858 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.612439 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:03.614002 4756 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1218 00:29:03.618521 1299733 logs.go:123] Gathering logs for containerd ...
I1218 00:29:03.618533 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1218 00:29:03.656888 1299733 logs.go:123] Gathering logs for container status ...
I1218 00:29:03.656907 1299733 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1218 00:29:03.685692 1299733 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000195767s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1218 00:29:03.685732 1299733 out.go:285] *
W1218 00:29:03.685795 1299733 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000195767s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1218 00:29:03.685849 1299733 out.go:285] *
W1218 00:29:03.687965 1299733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1218 00:29:03.692973 1299733 out.go:203]
W1218 00:29:03.696844 1299733 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000195767s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1218 00:29:03.696895 1299733 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1218 00:29:03.696914 1299733 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1218 00:29:03.700130 1299733 out.go:203]
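For the suggested retry, the flag from the Suggestion line attaches an extra kubelet config key to minikube start; a sketch using this run's profile name:

  $ minikube start -p functional-232602 --extra-config=kubelet.cgroup-driver=systemd

Given the kubelet journal below, the likelier fix on this host is a cgroup v2 setup (or the failCgroupV1 override sketched earlier), since the kubelet is rejecting cgroup v1 outright rather than mis-detecting the cgroup driver.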
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918779954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918792951Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918833213Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918850033Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918881548Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918895825Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918905621Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918919004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918935299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.918963081Z" level=info msg="Connect containerd service"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.919244830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.919797795Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.939792958Z" level=info msg="Start subscribing containerd event"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.939884665Z" level=info msg="Start recovering state"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.940513394Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.940712824Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977535309Z" level=info msg="Start event monitor"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977731998Z" level=info msg="Start cni network conf syncer for default"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977826052Z" level=info msg="Start streaming server"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977899855Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.977958471Z" level=info msg="runtime interface starting up..."
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.978014208Z" level=info msg="starting plugins..."
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.978082424Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 18 00:20:55 functional-232602 systemd[1]: Started containerd.service - containerd container runtime.
Dec 18 00:20:55 functional-232602 containerd[766]: time="2025-12-18T00:20:55.980333395Z" level=info msg="containerd successfully booted in 0.083017s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1218 00:29:04.691398 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:04.691947 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:04.693591 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:04.694137 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1218 00:29:04.695897 4875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec17 22:10] kauditd_printk_skb: 8 callbacks suppressed
[Dec18 00:10] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
00:29:04 up 7:11, 0 user, load average: 0.17, 0.47, 0.88
Linux functional-232602 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 18 00:29:01 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:01 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:01 functional-232602 kubelet[4677]: E1218 00:29:01.947913 4677 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 18 00:29:01 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 18 00:29:02 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 18 00:29:02 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:02 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:02 functional-232602 kubelet[4682]: E1218 00:29:02.695044 4682 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 18 00:29:02 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 18 00:29:02 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 18 00:29:03 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 18 00:29:03 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:03 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:03 functional-232602 kubelet[4720]: E1218 00:29:03.455995 4720 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 18 00:29:03 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 18 00:29:03 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 18 00:29:04 functional-232602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 18 00:29:04 functional-232602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:04 functional-232602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 18 00:29:04 functional-232602 kubelet[4789]: E1218 00:29:04.206712 4789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 18 00:29:04 functional-232602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 18 00:29:04 functional-232602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
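The kubelet journal above shows the actual root cause behind both four-minute timeouts: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host, so systemd restarts it in a tight loop (restart counter 318 through 321). Which cgroup version the node runs can be checked with:

  $ stat -fc %T /sys/fs/cgroup   # cgroup2fs => cgroup v2 (unified); tmpfs => cgroup v1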
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-232602 -n functional-232602: exit status 6 (361.746793ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1218 00:29:05.177416 1305408 status.go:458] kubeconfig endpoint: get endpoint: "functional-232602" does not appear in /home/jenkins/minikube-integration/22186-1259289/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-232602" apiserver is not running, skipping kubectl commands (state="Stopped")
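The status helper reads a single field via a Go template and exits with status 6 when the kubeconfig has no entry for the profile; the hint in stdout points at the fix. By hand, roughly:

  $ minikube status -p functional-232602 --format '{{.APIServer}}'
  $ minikube update-context -p functional-232602   # repairs the stale kubectl context noted above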
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (497.98s)