=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1209 04:18:50.603627 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:21:06.737420 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:21:34.450441 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:38.986675 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:38.993144 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.004963 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.026903 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.069272 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.150853 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.312492 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:39.634315 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:40.276452 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:41.558073 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:44.119538 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:49.241264 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:22:59.483597 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:23:19.965625 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:24:00.928732 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:25:22.853780 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1209 04:26:06.736206 1144231 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
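Note: the cert_rotation errors above are emitted by a background client-cert reloader still watching certificates of profiles that no longer exist on disk (functional-717497 is shown being deleted in the Audit table below); the files are missing, not corrupt. A minimal diagnostic sketch, using the exact paths from the errors:

    # Confirm the watched client certs are really gone; "No such file or
    # directory" here matches the errors above and points to stale watchers.
    ls -l /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/addons-221952/client.crt \
          /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-717497/client.crt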
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.275785744s)
-- stdout --
* [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22081
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-667319" primary control-plane node in "functional-667319" cluster
* Pulling base image v0.0.48-1765184860-22066 ...
* Found network options:
- HTTP_PROXY=localhost:34739
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:34739 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
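The two warnings above come from the StartWithProxy test injecting HTTP_PROXY=localhost:34739. A hedged sketch of satisfying the NO_PROXY advice before such a start; the address list is an assumption built from the minikube IP in the warning and the service CIDR reported later in this log:

    # Exempt the minikube node IP (and local/in-cluster ranges) from the proxy.
    export NO_PROXY=192.168.49.2,192.168.49.0/24,10.96.0.0/12,localhost,127.0.0.1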
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001146574s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
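The SystemVerification warning above says cgroup v1 hosts must opt in explicitly for kubelet v1.35 or newer. A hedged sketch of that opt-in, written as an extra KubeletConfiguration document appended to the kubeadm config used in this run; the YAML field casing is an assumption (the warning quotes the option as 'FailCgroupV1'):

    # Append a KubeletConfiguration doc that keeps kubelet running on cgroup v1.
    cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF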
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00013653s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
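The troubleshooting commands kubeadm suggests must run on the node; with the docker driver the node is the functional-667319 container, so a hedged sketch of collecting them from the Jenkins host looks like:

    # Inspect the kubelet unit inside the kic container.
    docker exec functional-667319 systemctl status kubelet
    docker exec functional-667319 journalctl -xeu kubelet --no-pager | tail -n 50
    # Re-run the exact health probe kubeadm was polling.
    docker exec functional-667319 curl -sSL http://127.0.0.1:10248/healthz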
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
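Expanding the box's suggestion with the binary and profile from this run (a sketch, assuming the test binary is still at out/minikube-linux-arm64):

    # Capture the full log bundle to attach to a GitHub issue.
    out/minikube-linux-arm64 -p functional-667319 logs --file=logs.txt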
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
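A hedged sketch of acting on the Suggestion line above: the start flags are copied from the failing invocation, with the recommended kubelet cgroup-driver override appended.

    # Recreate the profile and retry with the suggested extra-config.
    out/minikube-linux-arm64 delete -p functional-667319
    out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd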
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-667319
helpers_test.go:243: (dbg) docker inspect functional-667319:
-- stdout --
[
{
"Id": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
"Created": "2025-12-09T04:18:34.060957311Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1182075,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-09T04:18:34.126944158Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e4eb91ed18a24161fce60c7cdd660144ecd5b8c5029dc2dea2c5e423c2f48ce4",
"ResolvConfPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hostname",
"HostsPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/hosts",
"LogPath": "/var/lib/docker/containers/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129/e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129-json.log",
"Name": "/functional-667319",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-667319:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-667319",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "e5b6511799c8d5c445a335a3bd5cc9a61b518fc27ac93dad8800da366ef32129",
"LowerDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0-init/diff:/var/lib/docker/overlay2/c44bb57aa59cc265266f37f2bb6e7ec0e7d641c3b4aeaa57e6d23deec6f0d1d4/diff",
"MergedDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/merged",
"UpperDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/diff",
"WorkDir": "/var/lib/docker/overlay2/b0239006282b6e4609a1f554d0a3fb94c749a13505795c8e4078cb2db194e8e0/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-667319",
"Source": "/var/lib/docker/volumes/functional-667319/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-667319",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-667319",
"name.minikube.sigs.k8s.io": "functional-667319",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7c81dabcd9e57af9bce0bc0f5619f6ef3a27af43f4b649283a5bd778ab256415",
"SandboxKey": "/var/run/docker/netns/7c81dabcd9e5",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33900"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33901"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33904"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33902"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33903"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-667319": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "fe:40:bd:46:56:d8",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "88b3a65de70c15005c532a44219284d4df94e474ca5b78b04514c2f932b03beb",
"EndpointID": "bdef7b156f4a28c1f641ae70b42db2750bb810ae6fe93fd65325e62eb232fe91",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-667319",
"e5b6511799c8"
]
}
}
}
}
]
-- /stdout --
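Only a handful of fields in the inspect dump matter for this post-mortem (container state and node IP). A sketch of pulling them directly with a Go template; the key names are taken from the JSON above:

    # The network is named after the profile, so index it as a string key.
    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "functional-667319").IPAddress}}' functional-667319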
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-667319 -n functional-667319: exit status 6 (336.854827ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1209 04:26:48.781296 1187132 status.go:458] kubeconfig endpoint: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
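The status output above suggests repairing the kubeconfig context; a minimal sketch with this run's binary and profile:

    # Repoint the kubeconfig entry at the functional-667319 endpoint.
    out/minikube-linux-arm64 update-context -p functional-667319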
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-667319 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-717497 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ ssh │ functional-717497 ssh sudo cat /etc/ssl/certs/11442312.pem │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image load --daemon kicbase/echo-server:functional-717497 --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ ssh │ functional-717497 ssh sudo cat /usr/share/ca-certificates/11442312.pem │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ ssh │ functional-717497 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ ssh │ functional-717497 ssh sudo cat /etc/test/nested/copy/1144231/hosts │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image save kicbase/echo-server:functional-717497 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image rm kicbase/echo-server:functional-717497 --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ update-context │ functional-717497 update-context --alsologtostderr -v=2 │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image save --daemon kicbase/echo-server:functional-717497 --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ update-context │ functional-717497 update-context --alsologtostderr -v=2 │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ update-context │ functional-717497 update-context --alsologtostderr -v=2 │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls --format short --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls --format yaml --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ ssh │ functional-717497 ssh pgrep buildkitd │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ │
│ image │ functional-717497 image ls --format json --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image build -t localhost/my-image:functional-717497 testdata/build --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls --format table --alsologtostderr │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ image │ functional-717497 image ls │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ delete │ -p functional-717497 │ functional-717497 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ 09 Dec 25 04:18 UTC │
│ start │ -p functional-667319 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-667319 │ jenkins │ v1.37.0 │ 09 Dec 25 04:18 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/09 04:18:29
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1209 04:18:29.204918 1181690 out.go:360] Setting OutFile to fd 1 ...
I1209 04:18:29.205025 1181690 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:29.205029 1181690 out.go:374] Setting ErrFile to fd 2...
I1209 04:18:29.205032 1181690 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1209 04:18:29.205273 1181690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22081-1142328/.minikube/bin
I1209 04:18:29.205655 1181690 out.go:368] Setting JSON to false
I1209 04:18:29.206436 1181690 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25233,"bootTime":1765228677,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1209 04:18:29.206487 1181690 start.go:143] virtualization:
I1209 04:18:29.210929 1181690 out.go:179] * [functional-667319] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1209 04:18:29.214568 1181690 out.go:179] - MINIKUBE_LOCATION=22081
I1209 04:18:29.214666 1181690 notify.go:221] Checking for updates...
I1209 04:18:29.221130 1181690 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 04:18:29.224257 1181690 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22081-1142328/kubeconfig
I1209 04:18:29.227482 1181690 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22081-1142328/.minikube
I1209 04:18:29.230682 1181690 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1209 04:18:29.233815 1181690 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1209 04:18:29.237216 1181690 driver.go:422] Setting default libvirt URI to qemu:///system
I1209 04:18:29.265797 1181690 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1209 04:18:29.265929 1181690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:18:29.319216 1181690 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:18:29.310329495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:18:29.319299 1181690 docker.go:319] overlay module found
I1209 04:18:29.322534 1181690 out.go:179] * Using the docker driver based on user configuration
I1209 04:18:29.325484 1181690 start.go:309] selected driver: docker
I1209 04:18:29.325493 1181690 start.go:927] validating driver "docker" against <nil>
I1209 04:18:29.325504 1181690 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 04:18:29.326243 1181690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 04:18:29.381124 1181690 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-09 04:18:29.372526494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1209 04:18:29.381280 1181690 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1209 04:18:29.381488 1181690 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 04:18:29.384407 1181690 out.go:179] * Using Docker driver with root privileges
I1209 04:18:29.387349 1181690 cni.go:84] Creating CNI manager for ""
I1209 04:18:29.387409 1181690 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 04:18:29.387416 1181690 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1209 04:18:29.387526 1181690 start.go:353] cluster config:
{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 04:18:29.392539 1181690 out.go:179] * Starting "functional-667319" primary control-plane node in "functional-667319" cluster
I1209 04:18:29.395408 1181690 cache.go:134] Beginning downloading kic base image for docker with containerd
I1209 04:18:29.398300 1181690 out.go:179] * Pulling base image v0.0.48-1765184860-22066 ...
I1209 04:18:29.401199 1181690 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon
I1209 04:18:29.401296 1181690 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1209 04:18:29.401315 1181690 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1209 04:18:29.401323 1181690 cache.go:65] Caching tarball of preloaded images
I1209 04:18:29.401420 1181690 preload.go:238] Found /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 04:18:29.401429 1181690 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1209 04:18:29.401767 1181690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
I1209 04:18:29.401784 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json: {Name:mk573ebc352f76a50b397be0f1c5137667ba678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:29.421163 1181690 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c in local docker daemon, skipping pull
I1209 04:18:29.421180 1181690 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c exists in daemon, skipping load
I1209 04:18:29.421192 1181690 cache.go:243] Successfully downloaded all kic artifacts
I1209 04:18:29.421226 1181690 start.go:360] acquireMachinesLock for functional-667319: {Name:mk6c31f0747796f5f8ac8ea1653d6ee60fe2a47d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 04:18:29.421344 1181690 start.go:364] duration metric: took 104.333µs to acquireMachinesLock for "functional-667319"
I1209 04:18:29.421366 1181690 start.go:93] Provisioning new machine with config: &{Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 04:18:29.421428 1181690 start.go:125] createHost starting for "" (driver="docker")
I1209 04:18:29.424719 1181690 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1209 04:18:29.425002 1181690 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34739 to docker env.
I1209 04:18:29.425028 1181690 start.go:159] libmachine.API.Create for "functional-667319" (driver="docker")
I1209 04:18:29.425064 1181690 client.go:173] LocalClient.Create starting
I1209 04:18:29.425120 1181690 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem
I1209 04:18:29.425151 1181690 main.go:143] libmachine: Decoding PEM data...
I1209 04:18:29.425169 1181690 main.go:143] libmachine: Parsing certificate...
I1209 04:18:29.425225 1181690 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem
I1209 04:18:29.425264 1181690 main.go:143] libmachine: Decoding PEM data...
I1209 04:18:29.425275 1181690 main.go:143] libmachine: Parsing certificate...
I1209 04:18:29.425631 1181690 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 04:18:29.441023 1181690 cli_runner.go:211] docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 04:18:29.441094 1181690 network_create.go:284] running [docker network inspect functional-667319] to gather additional debugging logs...
I1209 04:18:29.441110 1181690 cli_runner.go:164] Run: docker network inspect functional-667319
W1209 04:18:29.457025 1181690 cli_runner.go:211] docker network inspect functional-667319 returned with exit code 1
I1209 04:18:29.457053 1181690 network_create.go:287] error running [docker network inspect functional-667319]: docker network inspect functional-667319: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-667319 not found
I1209 04:18:29.457065 1181690 network_create.go:289] output of [docker network inspect functional-667319]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-667319 not found
** /stderr **
I1209 04:18:29.457182 1181690 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 04:18:29.474026 1181690 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400186f5d0}
I1209 04:18:29.474058 1181690 network_create.go:124] attempt to create docker network functional-667319 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1209 04:18:29.474113 1181690 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-667319 functional-667319
I1209 04:18:29.526052 1181690 network_create.go:108] docker network functional-667319 192.168.49.0/24 created
I1209 04:18:29.526074 1181690 kic.go:121] calculated static IP "192.168.49.2" for the "functional-667319" container
I1209 04:18:29.526144 1181690 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1209 04:18:29.540730 1181690 cli_runner.go:164] Run: docker volume create functional-667319 --label name.minikube.sigs.k8s.io=functional-667319 --label created_by.minikube.sigs.k8s.io=true
I1209 04:18:29.558452 1181690 oci.go:103] Successfully created a docker volume functional-667319
I1209 04:18:29.558532 1181690 cli_runner.go:164] Run: docker run --rm --name functional-667319-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-667319 --entrypoint /usr/bin/test -v functional-667319:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -d /var/lib
I1209 04:18:30.124549 1181690 oci.go:107] Successfully prepared a docker volume functional-667319
I1209 04:18:30.124612 1181690 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1209 04:18:30.124621 1181690 kic.go:194] Starting extracting preloaded images to volume ...
I1209 04:18:30.124697 1181690 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-667319:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir
I1209 04:18:33.987163 1181690 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22081-1142328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-667319:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c -I lz4 -xf /preloaded.tar -C /extractDir: (3.862430131s)
I1209 04:18:33.987183 1181690 kic.go:203] duration metric: took 3.862559841s to extract preloaded images to volume ...
W1209 04:18:33.987328 1181690 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1209 04:18:33.987422 1181690 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1209 04:18:34.045739 1181690 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-667319 --name functional-667319 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-667319 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-667319 --network functional-667319 --ip 192.168.49.2 --volume functional-667319:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c
I1209 04:18:34.328371 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Running}}
I1209 04:18:34.356856 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:18:34.385701 1181690 cli_runner.go:164] Run: docker exec functional-667319 stat /var/lib/dpkg/alternatives/iptables
I1209 04:18:34.448336 1181690 oci.go:144] the created container "functional-667319" has a running status.
I1209 04:18:34.448356 1181690 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa...
I1209 04:18:34.590892 1181690 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1209 04:18:34.616547 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:18:34.645902 1181690 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1209 04:18:34.645913 1181690 kic_runner.go:114] Args: [docker exec --privileged functional-667319 chown docker:docker /home/docker/.ssh/authorized_keys]
I1209 04:18:34.712746 1181690 cli_runner.go:164] Run: docker container inspect functional-667319 --format={{.State.Status}}
I1209 04:18:34.739745 1181690 machine.go:94] provisionDockerMachine start ...
I1209 04:18:34.739833 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:34.766446 1181690 main.go:143] libmachine: Using SSH client type: native
I1209 04:18:34.766795 1181690 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil> [] 0s} 127.0.0.1 33900 <nil> <nil>}
I1209 04:18:34.766802 1181690 main.go:143] libmachine: About to run SSH command:
hostname
I1209 04:18:34.767473 1181690 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38220->127.0.0.1:33900: read: connection reset by peer
I1209 04:18:37.919716 1181690 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
I1209 04:18:37.919734 1181690 ubuntu.go:182] provisioning hostname "functional-667319"
I1209 04:18:37.919807 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:37.937658 1181690 main.go:143] libmachine: Using SSH client type: native
I1209 04:18:37.937961 1181690 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil> [] 0s} 127.0.0.1 33900 <nil> <nil>}
I1209 04:18:37.937969 1181690 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-667319 && echo "functional-667319" | sudo tee /etc/hostname
I1209 04:18:38.105800 1181690 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-667319
I1209 04:18:38.105874 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:38.124605 1181690 main.go:143] libmachine: Using SSH client type: native
I1209 04:18:38.124916 1181690 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db140] 0x3dd640 <nil> [] 0s} 127.0.0.1 33900 <nil> <nil>}
I1209 04:18:38.124929 1181690 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-667319' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-667319/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-667319' | sudo tee -a /etc/hosts;
  fi
fi
I1209 04:18:38.276300 1181690 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1209 04:18:38.276316 1181690 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22081-1142328/.minikube CaCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22081-1142328/.minikube}
I1209 04:18:38.276339 1181690 ubuntu.go:190] setting up certificates
I1209 04:18:38.276347 1181690 provision.go:84] configureAuth start
I1209 04:18:38.276407 1181690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
I1209 04:18:38.298940 1181690 provision.go:143] copyHostCerts
I1209 04:18:38.298994 1181690 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem, removing ...
I1209 04:18:38.299001 1181690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem
I1209 04:18:38.299076 1181690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.pem (1078 bytes)
I1209 04:18:38.299224 1181690 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem, removing ...
I1209 04:18:38.299229 1181690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem
I1209 04:18:38.299257 1181690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/cert.pem (1123 bytes)
I1209 04:18:38.299307 1181690 exec_runner.go:144] found /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem, removing ...
I1209 04:18:38.299310 1181690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem
I1209 04:18:38.299333 1181690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22081-1142328/.minikube/key.pem (1675 bytes)
I1209 04:18:38.299376 1181690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem org=jenkins.functional-667319 san=[127.0.0.1 192.168.49.2 functional-667319 localhost minikube]
I1209 04:18:38.353979 1181690 provision.go:177] copyRemoteCerts
I1209 04:18:38.354038 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 04:18:38.354078 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:38.371581 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:18:38.475497 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 04:18:38.492003 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1209 04:18:38.509314 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1209 04:18:38.526298 1181690 provision.go:87] duration metric: took 249.929309ms to configureAuth
I1209 04:18:38.526329 1181690 ubuntu.go:206] setting minikube options for container-runtime
I1209 04:18:38.526505 1181690 config.go:182] Loaded profile config "functional-667319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1209 04:18:38.526511 1181690 machine.go:97] duration metric: took 3.786756193s to provisionDockerMachine
I1209 04:18:38.526516 1181690 client.go:176] duration metric: took 9.101448027s to LocalClient.Create
I1209 04:18:38.526529 1181690 start.go:167] duration metric: took 9.101501868s to libmachine.API.Create "functional-667319"
I1209 04:18:38.526535 1181690 start.go:293] postStartSetup for "functional-667319" (driver="docker")
I1209 04:18:38.526544 1181690 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 04:18:38.526598 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 04:18:38.526633 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:38.547176 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:18:38.651954 1181690 ssh_runner.go:195] Run: cat /etc/os-release
I1209 04:18:38.655038 1181690 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1209 04:18:38.655054 1181690 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1209 04:18:38.655065 1181690 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/addons for local assets ...
I1209 04:18:38.655118 1181690 filesync.go:126] Scanning /home/jenkins/minikube-integration/22081-1142328/.minikube/files for local assets ...
I1209 04:18:38.655206 1181690 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem -> 11442312.pem in /etc/ssl/certs
I1209 04:18:38.655285 1181690 filesync.go:149] local asset: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts -> hosts in /etc/test/nested/copy/1144231
I1209 04:18:38.655328 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1144231
I1209 04:18:38.662615 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /etc/ssl/certs/11442312.pem (1708 bytes)
I1209 04:18:38.678880 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/test/nested/copy/1144231/hosts --> /etc/test/nested/copy/1144231/hosts (40 bytes)
I1209 04:18:38.696568 1181690 start.go:296] duration metric: took 170.019026ms for postStartSetup
I1209 04:18:38.696969 1181690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
I1209 04:18:38.712904 1181690 profile.go:143] Saving config to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/config.json ...
I1209 04:18:38.713170 1181690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1209 04:18:38.713212 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:38.729199 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:18:38.832870 1181690 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1209 04:18:38.837512 1181690 start.go:128] duration metric: took 9.416071818s to createHost
I1209 04:18:38.837527 1181690 start.go:83] releasing machines lock for "functional-667319", held for 9.416176431s
I1209 04:18:38.837596 1181690 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-667319
I1209 04:18:38.858538 1181690 out.go:179] * Found network options:
I1209 04:18:38.861514 1181690 out.go:179] - HTTP_PROXY=localhost:34739
W1209 04:18:38.864664 1181690 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1209 04:18:38.867745 1181690 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1209 04:18:38.870715 1181690 ssh_runner.go:195] Run: cat /version.json
I1209 04:18:38.870774 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:38.870814 1181690 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 04:18:38.870880 1181690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-667319
I1209 04:18:38.890006 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:18:38.890448 1181690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/22081-1142328/.minikube/machines/functional-667319/id_rsa Username:docker}
I1209 04:18:38.992431 1181690 ssh_runner.go:195] Run: systemctl --version
I1209 04:18:39.078214 1181690 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1209 04:18:39.082636 1181690 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1209 04:18:39.082725 1181690 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 04:18:39.109371 1181690 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1209 04:18:39.109385 1181690 start.go:496] detecting cgroup driver to use...
I1209 04:18:39.109415 1181690 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1209 04:18:39.109476 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1209 04:18:39.124209 1181690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1209 04:18:39.137718 1181690 docker.go:218] disabling cri-docker service (if available) ...
I1209 04:18:39.137785 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 04:18:39.157200 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 04:18:39.175756 1181690 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 04:18:39.293090 1181690 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 04:18:39.415035 1181690 docker.go:234] disabling docker service ...
I1209 04:18:39.415091 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 04:18:39.436000 1181690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 04:18:39.450194 1181690 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 04:18:39.569102 1181690 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 04:18:39.685732 1181690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 04:18:39.698755 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1209 04:18:39.713555 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1209 04:18:39.722194 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1209 04:18:39.731110 1181690 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1209 04:18:39.731171 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1209 04:18:39.740099 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 04:18:39.748613 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1209 04:18:39.757268 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 04:18:39.765996 1181690 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 04:18:39.773811 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1209 04:18:39.782322 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1209 04:18:39.790681 1181690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1209 04:18:39.798963 1181690 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 04:18:39.806177 1181690 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1209 04:18:39.813312 1181690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 04:18:39.922820 1181690 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1209 04:18:40.082191 1181690 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1209 04:18:40.082285 1181690 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1209 04:18:40.087257 1181690 start.go:564] Will wait 60s for crictl version
I1209 04:18:40.087330 1181690 ssh_runner.go:195] Run: which crictl
I1209 04:18:40.091598 1181690 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1209 04:18:40.124551 1181690 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1209 04:18:40.124640 1181690 ssh_runner.go:195] Run: containerd --version
I1209 04:18:40.147833 1181690 ssh_runner.go:195] Run: containerd --version
I1209 04:18:40.173200 1181690 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1209 04:18:40.176165 1181690 cli_runner.go:164] Run: docker network inspect functional-667319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 04:18:40.193118 1181690 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1209 04:18:40.197226 1181690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 04:18:40.207276 1181690 kubeadm.go:884] updating cluster {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1209 04:18:40.207380 1181690 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1209 04:18:40.207446 1181690 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 04:18:40.234910 1181690 containerd.go:627] all images are preloaded for containerd runtime.
I1209 04:18:40.234922 1181690 containerd.go:534] Images already preloaded, skipping extraction
I1209 04:18:40.234983 1181690 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 04:18:40.259599 1181690 containerd.go:627] all images are preloaded for containerd runtime.
I1209 04:18:40.259611 1181690 cache_images.go:86] Images are preloaded, skipping loading
I1209 04:18:40.259617 1181690 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1209 04:18:40.259711 1181690 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-667319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
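(The [Unit]/[Service] fragment above is the systemd drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; to see the merged unit that systemd actually runs, one option is:

    docker exec functional-667319 systemctl cat kubelet
    # prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in,
    # i.e. the ExecStart= reset followed by the full kubelet command line shown above
)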
I1209 04:18:40.259776 1181690 ssh_runner.go:195] Run: sudo crictl info
I1209 04:18:40.284519 1181690 cni.go:84] Creating CNI manager for ""
I1209 04:18:40.284529 1181690 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 04:18:40.284548 1181690 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1209 04:18:40.284572 1181690 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-667319 NodeName:functional-667319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1209 04:18:40.284687 1181690 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-667319"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
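(A config rendered like the one above can be sanity-checked before the kubeadm init attempt below; both are standard kubeadm subcommands, with paths as in this log, shown only as a sketch:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
)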
I1209 04:18:40.284754 1181690 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1209 04:18:40.292756 1181690 binaries.go:51] Found k8s binaries, skipping transfer
I1209 04:18:40.292818 1181690 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1209 04:18:40.300556 1181690 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1209 04:18:40.313475 1181690 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1209 04:18:40.325992 1181690 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1209 04:18:40.338439 1181690 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1209 04:18:40.341964 1181690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 04:18:40.351214 1181690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 04:18:40.465426 1181690 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 04:18:40.481455 1181690 certs.go:69] Setting up /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319 for IP: 192.168.49.2
I1209 04:18:40.481466 1181690 certs.go:195] generating shared ca certs ...
I1209 04:18:40.481480 1181690 certs.go:227] acquiring lock for ca certs: {Name:mk15788702f8c4e23b5aeab3f44961d296fab259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:40.481621 1181690 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key
I1209 04:18:40.481670 1181690 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key
I1209 04:18:40.481676 1181690 certs.go:257] generating profile certs ...
I1209 04:18:40.481731 1181690 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key
I1209 04:18:40.481741 1181690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt with IP's: []
I1209 04:18:41.080936 1181690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt ...
I1209 04:18:41.080952 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.crt: {Name:mkf3bdb384a02d9ddee4d4fb76ce831c03b056f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:41.081154 1181690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key ...
I1209 04:18:41.081160 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/client.key: {Name:mk720c9ccfeaa8a2cd0ee2bda926880388858ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:41.081261 1181690 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595
I1209 04:18:41.081272 1181690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1209 04:18:41.180008 1181690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595 ...
I1209 04:18:41.180027 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595: {Name:mka2e19c54b03f20264fe636ee16e2a33aeb03cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:41.180179 1181690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595 ...
I1209 04:18:41.180186 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595: {Name:mkd1a25832b55ee9080d9aec7535ff72db49438e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:41.180265 1181690 certs.go:382] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt.c80eb595 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt
I1209 04:18:41.180336 1181690 certs.go:386] copying /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key.c80eb595 -> /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key
I1209 04:18:41.180389 1181690 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key
I1209 04:18:41.180400 1181690 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt with IP's: []
I1209 04:18:41.444871 1181690 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt ...
I1209 04:18:41.444887 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt: {Name:mk1009b33da5d5106bac6bded991980213b8309e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:41.445080 1181690 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key ...
I1209 04:18:41.445089 1181690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key: {Name:mk15adbf16c2803947edf00d6e11a4e92bbad30f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 04:18:41.445281 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem (1338 bytes)
W1209 04:18:41.445326 1181690 certs.go:480] ignoring /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231_empty.pem, impossibly tiny 0 bytes
I1209 04:18:41.445335 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca-key.pem (1679 bytes)
I1209 04:18:41.445361 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/ca.pem (1078 bytes)
I1209 04:18:41.445387 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/cert.pem (1123 bytes)
I1209 04:18:41.445409 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/key.pem (1675 bytes)
I1209 04:18:41.445451 1181690 certs.go:484] found cert: /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem (1708 bytes)
I1209 04:18:41.446018 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 04:18:41.464483 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1209 04:18:41.485635 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 04:18:41.505616 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1209 04:18:41.527249 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1209 04:18:41.545105 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1209 04:18:41.562186 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 04:18:41.579876 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/profiles/functional-667319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1209 04:18:41.596558 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/certs/1144231.pem --> /usr/share/ca-certificates/1144231.pem (1338 bytes)
I1209 04:18:41.613276 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/files/etc/ssl/certs/11442312.pem --> /usr/share/ca-certificates/11442312.pem (1708 bytes)
I1209 04:18:41.629730 1181690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22081-1142328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 04:18:41.646953 1181690 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1209 04:18:41.659251 1181690 ssh_runner.go:195] Run: openssl version
I1209 04:18:41.665347 1181690 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/11442312.pem
I1209 04:18:41.672518 1181690 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/11442312.pem /etc/ssl/certs/11442312.pem
I1209 04:18:41.679415 1181690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11442312.pem
I1209 04:18:41.683303 1181690 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 9 04:18 /usr/share/ca-certificates/11442312.pem
I1209 04:18:41.683360 1181690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11442312.pem
I1209 04:18:41.732527 1181690 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1209 04:18:41.741038 1181690 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/11442312.pem /etc/ssl/certs/3ec20f2e.0
I1209 04:18:41.748566 1181690 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1209 04:18:41.757170 1181690 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1209 04:18:41.764798 1181690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 04:18:41.768796 1181690 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 04:09 /usr/share/ca-certificates/minikubeCA.pem
I1209 04:18:41.768855 1181690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 04:18:41.811587 1181690 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1209 04:18:41.819063 1181690 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1209 04:18:41.826649 1181690 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1144231.pem
I1209 04:18:41.833938 1181690 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1144231.pem /etc/ssl/certs/1144231.pem
I1209 04:18:41.841442 1181690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1144231.pem
I1209 04:18:41.845251 1181690 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 9 04:18 /usr/share/ca-certificates/1144231.pem
I1209 04:18:41.845317 1181690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem
I1209 04:18:41.886121 1181690 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1209 04:18:41.893802 1181690 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1144231.pem /etc/ssl/certs/51391683.0
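(The ln/openssl sequence above implements the standard OpenSSL hashed-CA-directory layout: each certificate under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. The same pairing done by hand, using the cert from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/1144231.pem)
    sudo ln -fs /usr/share/ca-certificates/1144231.pem "/etc/ssl/certs/${h}.0"
    # reproduces the 51391683.0 link created above
)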
I1209 04:18:41.902105 1181690 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 04:18:41.905774 1181690 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1209 04:18:41.905819 1181690 kubeadm.go:401] StartCluster: {Name:functional-667319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765184860-22066@sha256:0e5cf9b676e5819ee8c93795a046ddcf50a7379e782f38a8563fb7f49d6fca0c Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-667319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 04:18:41.905900 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1209 04:18:41.905956 1181690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1209 04:18:41.935719 1181690 cri.go:89] found id: ""
I1209 04:18:41.935783 1181690 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1209 04:18:41.944904 1181690 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1209 04:18:41.952908 1181690 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1209 04:18:41.952961 1181690 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1209 04:18:41.962481 1181690 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1209 04:18:41.962497 1181690 kubeadm.go:158] found existing configuration files:
I1209 04:18:41.962550 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1209 04:18:41.970416 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1209 04:18:41.970469 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1209 04:18:41.978995 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1209 04:18:41.991899 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1209 04:18:41.991955 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1209 04:18:42.003475 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1209 04:18:42.017417 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1209 04:18:42.017499 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1209 04:18:42.027621 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1209 04:18:42.036501 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1209 04:18:42.036569 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1209 04:18:42.044636 1181690 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1209 04:18:42.092809 1181690 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1209 04:18:42.093759 1181690 kubeadm.go:319] [preflight] Running pre-flight checks
I1209 04:18:42.245597 1181690 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1209 04:18:42.245662 1181690 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1209 04:18:42.245697 1181690 kubeadm.go:319] OS: Linux
I1209 04:18:42.245742 1181690 kubeadm.go:319] CGROUPS_CPU: enabled
I1209 04:18:42.245789 1181690 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1209 04:18:42.245835 1181690 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1209 04:18:42.245882 1181690 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1209 04:18:42.245929 1181690 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1209 04:18:42.245993 1181690 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1209 04:18:42.246038 1181690 kubeadm.go:319] CGROUPS_PIDS: enabled
I1209 04:18:42.246084 1181690 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1209 04:18:42.246129 1181690 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1209 04:18:42.333955 1181690 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1209 04:18:42.334060 1181690 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1209 04:18:42.334150 1181690 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1209 04:18:42.343117 1181690 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1209 04:18:42.349439 1181690 out.go:252] - Generating certificates and keys ...
I1209 04:18:42.349552 1181690 kubeadm.go:319] [certs] Using existing ca certificate authority
I1209 04:18:42.349632 1181690 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1209 04:18:42.400810 1181690 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1209 04:18:42.535382 1181690 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1209 04:18:42.634496 1181690 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1209 04:18:42.807234 1181690 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1209 04:18:42.923275 1181690 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1209 04:18:42.923574 1181690 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1209 04:18:43.417944 1181690 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1209 04:18:43.418318 1181690 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1209 04:18:43.745058 1181690 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1209 04:18:44.186210 1181690 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1209 04:18:44.275344 1181690 kubeadm.go:319] [certs] Generating "sa" key and public key
I1209 04:18:44.275578 1181690 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1209 04:18:44.711585 1181690 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1209 04:18:45.081990 1181690 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1209 04:18:45.294992 1181690 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1209 04:18:45.570864 1181690 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1209 04:18:45.710745 1181690 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1209 04:18:45.711703 1181690 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1209 04:18:45.714623 1181690 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1209 04:18:45.718165 1181690 out.go:252] - Booting up control plane ...
I1209 04:18:45.718259 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1209 04:18:45.718333 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1209 04:18:45.719287 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1209 04:18:45.739545 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1209 04:18:45.739646 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1209 04:18:45.746984 1181690 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1209 04:18:45.748352 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1209 04:18:45.748397 1181690 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1209 04:18:45.881377 1181690 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1209 04:18:45.881488 1181690 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1209 04:22:45.882359 1181690 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001146574s
I1209 04:22:45.882383 1181690 kubeadm.go:319]
I1209 04:22:45.882446 1181690 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1209 04:22:45.882481 1181690 kubeadm.go:319] - The kubelet is not running
I1209 04:22:45.882594 1181690 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1209 04:22:45.882601 1181690 kubeadm.go:319]
I1209 04:22:45.882713 1181690 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1209 04:22:45.882747 1181690 kubeadm.go:319] - 'systemctl status kubelet'
I1209 04:22:45.882781 1181690 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1209 04:22:45.882784 1181690 kubeadm.go:319]
I1209 04:22:45.887607 1181690 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1209 04:22:45.888097 1181690 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1209 04:22:45.888214 1181690 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1209 04:22:45.888453 1181690 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1209 04:22:45.888457 1181690 kubeadm.go:319]
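(The failure above is the kubelet never answering its local healthz probe within the 4m0s budget; the checks kubeadm suggests, and the URL it polls, can be replayed inside the node container, e.g.:

    docker exec functional-667319 systemctl status kubelet --no-pager
    docker exec functional-667319 journalctl -xeu kubelet --no-pager | tail -n 50
    docker exec functional-667319 curl -sS http://127.0.0.1:10248/healthz
)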
W1209 04:22:45.888681 1181690 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-667319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001146574s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
I1209 04:22:45.888778 1181690 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1209 04:22:45.889053 1181690 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1209 04:22:46.296600 1181690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1209 04:22:46.310736 1181690 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1209 04:22:46.310790 1181690 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1209 04:22:46.318580 1181690 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1209 04:22:46.318591 1181690 kubeadm.go:158] found existing configuration files:
I1209 04:22:46.318644 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1209 04:22:46.326307 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1209 04:22:46.326369 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1209 04:22:46.333761 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1209 04:22:46.341698 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1209 04:22:46.341760 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1209 04:22:46.349579 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1209 04:22:46.357345 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1209 04:22:46.357404 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1209 04:22:46.364679 1181690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1209 04:22:46.372004 1181690 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1209 04:22:46.372138 1181690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1209 04:22:46.379457 1181690 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1209 04:22:46.418901 1181690 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1209 04:22:46.418949 1181690 kubeadm.go:319] [preflight] Running pre-flight checks
I1209 04:22:46.487909 1181690 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1209 04:22:46.487973 1181690 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1209 04:22:46.488007 1181690 kubeadm.go:319] OS: Linux
I1209 04:22:46.488071 1181690 kubeadm.go:319] CGROUPS_CPU: enabled
I1209 04:22:46.488119 1181690 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1209 04:22:46.488164 1181690 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1209 04:22:46.488210 1181690 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1209 04:22:46.488258 1181690 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1209 04:22:46.488304 1181690 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1209 04:22:46.488347 1181690 kubeadm.go:319] CGROUPS_PIDS: enabled
I1209 04:22:46.488394 1181690 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1209 04:22:46.488439 1181690 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1209 04:22:46.554157 1181690 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1209 04:22:46.554260 1181690 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1209 04:22:46.554349 1181690 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1209 04:22:46.560519 1181690 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1209 04:22:46.565795 1181690 out.go:252] - Generating certificates and keys ...
I1209 04:22:46.565892 1181690 kubeadm.go:319] [certs] Using existing ca certificate authority
I1209 04:22:46.565970 1181690 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1209 04:22:46.566099 1181690 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1209 04:22:46.566188 1181690 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1209 04:22:46.566269 1181690 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1209 04:22:46.566326 1181690 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1209 04:22:46.566407 1181690 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1209 04:22:46.566472 1181690 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1209 04:22:46.566551 1181690 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1209 04:22:46.566665 1181690 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1209 04:22:46.566731 1181690 kubeadm.go:319] [certs] Using the existing "sa" key
I1209 04:22:46.566860 1181690 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1209 04:22:47.111495 1181690 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1209 04:22:47.418844 1181690 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1209 04:22:47.540983 1181690 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1209 04:22:47.683464 1181690 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1209 04:22:47.836599 1181690 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1209 04:22:47.837234 1181690 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1209 04:22:47.839956 1181690 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1209 04:22:47.843279 1181690 out.go:252] - Booting up control plane ...
I1209 04:22:47.843384 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1209 04:22:47.843468 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1209 04:22:47.843539 1181690 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1209 04:22:47.863716 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1209 04:22:47.863849 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1209 04:22:47.872846 1181690 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1209 04:22:47.873555 1181690 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1209 04:22:47.873793 1181690 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1209 04:22:48.010450 1181690 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1209 04:22:48.010558 1181690 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1209 04:26:48.010112 1181690 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00013653s
I1209 04:26:48.010132 1181690 kubeadm.go:319]
I1209 04:26:48.010204 1181690 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1209 04:26:48.010237 1181690 kubeadm.go:319] - The kubelet is not running
I1209 04:26:48.010382 1181690 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1209 04:26:48.010385 1181690 kubeadm.go:319]
I1209 04:26:48.010510 1181690 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1209 04:26:48.010552 1181690 kubeadm.go:319] - 'systemctl status kubelet'
I1209 04:26:48.010611 1181690 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1209 04:26:48.010617 1181690 kubeadm.go:319]
I1209 04:26:48.016867 1181690 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1209 04:26:48.017419 1181690 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1209 04:26:48.017550 1181690 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1209 04:26:48.017804 1181690 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1209 04:26:48.017808 1181690 kubeadm.go:319]
I1209 04:26:48.017926 1181690 kubeadm.go:403] duration metric: took 8m6.112110258s to StartCluster
I1209 04:26:48.017931 1181690 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1209 04:26:48.017974 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1209 04:26:48.018040 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1209 04:26:48.045879 1181690 cri.go:89] found id: ""
I1209 04:26:48.045893 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.045900 1181690 logs.go:284] No container was found matching "kube-apiserver"
I1209 04:26:48.045905 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1209 04:26:48.045969 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1209 04:26:48.070505 1181690 cri.go:89] found id: ""
I1209 04:26:48.070519 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.070526 1181690 logs.go:284] No container was found matching "etcd"
I1209 04:26:48.070531 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1209 04:26:48.070591 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1209 04:26:48.096031 1181690 cri.go:89] found id: ""
I1209 04:26:48.096061 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.096068 1181690 logs.go:284] No container was found matching "coredns"
I1209 04:26:48.096074 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1209 04:26:48.096162 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1209 04:26:48.119513 1181690 cri.go:89] found id: ""
I1209 04:26:48.119527 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.119534 1181690 logs.go:284] No container was found matching "kube-scheduler"
I1209 04:26:48.119539 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1209 04:26:48.119599 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1209 04:26:48.144231 1181690 cri.go:89] found id: ""
I1209 04:26:48.144245 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.144252 1181690 logs.go:284] No container was found matching "kube-proxy"
I1209 04:26:48.144257 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1209 04:26:48.144317 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1209 04:26:48.168267 1181690 cri.go:89] found id: ""
I1209 04:26:48.168281 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.168287 1181690 logs.go:284] No container was found matching "kube-controller-manager"
I1209 04:26:48.168293 1181690 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1209 04:26:48.168353 1181690 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1209 04:26:48.196948 1181690 cri.go:89] found id: ""
I1209 04:26:48.196961 1181690 logs.go:282] 0 containers: []
W1209 04:26:48.196967 1181690 logs.go:284] No container was found matching "kindnet"
I1209 04:26:48.196976 1181690 logs.go:123] Gathering logs for container status ...
I1209 04:26:48.196986 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1209 04:26:48.226160 1181690 logs.go:123] Gathering logs for kubelet ...
I1209 04:26:48.226176 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1209 04:26:48.282554 1181690 logs.go:123] Gathering logs for dmesg ...
I1209 04:26:48.282573 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1209 04:26:48.299510 1181690 logs.go:123] Gathering logs for describe nodes ...
I1209 04:26:48.299527 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1209 04:26:48.367088 1181690 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1209 04:26:48.358889 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.359604 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.361171 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.361715 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.363317 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1209 04:26:48.358889 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.359604 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.361171 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.361715 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:48.363317 4752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1209 04:26:48.367098 1181690 logs.go:123] Gathering logs for containerd ...
I1209 04:26:48.367109 1181690 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
W1209 04:26:48.405107 1181690 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00013653s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1209 04:26:48.405148 1181690 out.go:285] *
W1209 04:26:48.405832 1181690 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00013653s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1209 04:26:48.405980 1181690 out.go:285] *
W1209 04:26:48.408735 1181690 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 04:26:48.416479 1181690 out.go:203]
W1209 04:26:48.420159 1181690 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00013653s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1209 04:26:48.420196 1181690 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1209 04:26:48.420216 1181690 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1209 04:26:48.423364 1181690 out.go:203]
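The suggestion above is minikube's own hint for this failure class. A retry along those lines would look like the sketch below (profile name taken from this run; any other flags from the original invocation would need to be repeated as well):

    out/minikube-linux-arm64 start -p functional-667319 --extra-config=kubelet.cgroup-driver=systemd

Note that the kubelet journal later in this log shows the kubelet failing its cgroup v1 validation outright, so changing only the cgroup driver may not be enough on this host.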
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999474819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999490547Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999529192Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999546644Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999557294Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999568174Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999576813Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999588218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999606868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999644224Z" level=info msg="Connect containerd service"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:39.999997616Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.000776464Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.028738895Z" level=info msg="Start subscribing containerd event"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.028845468Z" level=info msg="Start recovering state"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.029071282Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.029197145Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077421160Z" level=info msg="Start event monitor"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077708560Z" level=info msg="Start cni network conf syncer for default"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077817956Z" level=info msg="Start streaming server"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.077918597Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.078152526Z" level=info msg="runtime interface starting up..."
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.078256031Z" level=info msg="starting plugins..."
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.078333099Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 09 04:18:40 functional-667319 systemd[1]: Started containerd.service - containerd container runtime.
Dec 09 04:18:40 functional-667319 containerd[759]: time="2025-12-09T04:18:40.081024715Z" level=info msg="containerd successfully booted in 0.104352s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1209 04:26:49.398510 4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:49.398937 4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:49.400525 4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:49.401003 4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1209 04:26:49.402582 4857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 9 03:13] overlayfs: idmapped layers are currently not supported
[ +25.904254] overlayfs: idmapped layers are currently not supported
[Dec 9 03:14] overlayfs: idmapped layers are currently not supported
[Dec 9 03:16] overlayfs: idmapped layers are currently not supported
[Dec 9 03:18] overlayfs: idmapped layers are currently not supported
[Dec 9 03:19] overlayfs: idmapped layers are currently not supported
[Dec 9 03:30] overlayfs: idmapped layers are currently not supported
[Dec 9 03:32] overlayfs: idmapped layers are currently not supported
[ +28.114653] overlayfs: idmapped layers are currently not supported
[Dec 9 03:33] overlayfs: idmapped layers are currently not supported
[ +23.720849] overlayfs: idmapped layers are currently not supported
[Dec 9 03:34] overlayfs: idmapped layers are currently not supported
[Dec 9 03:35] overlayfs: idmapped layers are currently not supported
[Dec 9 03:36] overlayfs: idmapped layers are currently not supported
[Dec 9 03:37] overlayfs: idmapped layers are currently not supported
[Dec 9 03:38] overlayfs: idmapped layers are currently not supported
[ +23.656275] overlayfs: idmapped layers are currently not supported
[Dec 9 03:39] overlayfs: idmapped layers are currently not supported
[Dec 9 03:57] overlayfs: idmapped layers are currently not supported
[Dec 9 03:58] overlayfs: idmapped layers are currently not supported
[Dec 9 04:00] overlayfs: idmapped layers are currently not supported
[Dec 9 04:02] overlayfs: idmapped layers are currently not supported
[Dec 9 04:03] overlayfs: idmapped layers are currently not supported
[Dec 9 04:05] overlayfs: idmapped layers are currently not supported
[Dec 9 04:06] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
04:26:49 up 7:08, 0 user, load average: 0.08, 0.49, 1.11
Linux functional-667319 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 09 04:26:46 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:46 functional-667319 kubelet[4664]: E1209 04:26:46.480101 4664 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 09 04:26:46 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 04:26:46 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 09 04:26:47 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:47 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:47 functional-667319 kubelet[4670]: E1209 04:26:47.225776 4670 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 09 04:26:47 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:47 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:47 functional-667319 kubelet[4675]: E1209 04:26:47.981193 4675 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 04:26:47 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 09 04:26:48 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 09 04:26:48 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:48 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:48 functional-667319 kubelet[4769]: E1209 04:26:48.755143 4769 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 09 04:26:48 functional-667319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 04:26:48 functional-667319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 09 04:26:49 functional-667319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 09 04:26:49 functional-667319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 09 04:26:49 functional-667319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
-- /stdout --
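The kubelet crash loop above (restart counter past 300, every attempt exiting with "kubelet is configured to not run on a host using cgroup v1") identifies the root cause: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host. A quick diagnostic sketch to confirm the host's cgroup mode, run inside the node (for example via `minikube ssh -p functional-667319`):

    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1

The kernel section above (5.15.0-1084-aws, an Ubuntu 20.04 build) is consistent with a cgroup v1 host.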
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-667319 -n functional-667319: exit status 6 (322.024604ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1209 04:26:49.843522 1187351 status.go:458] kubeconfig endpoint: get endpoint: "functional-667319" does not appear in /home/jenkins/minikube-integration/22081-1142328/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-667319" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.70s)
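For the record, both failed init attempts surface the same fix in their SystemVerification warning: on a cgroup v1 host, kubelet v1.35 or newer requires the kubelet configuration option 'FailCgroupV1' to be set to 'false', and the validation must also be skipped explicitly. A minimal sketch of what that could look like in the kubelet config file, assuming the usual lowerCamelCase field naming of KubeletConfiguration (verify against the KEP linked in the warning before relying on it):

    # hypothetical excerpt of /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # assumed field name; opts back in to the deprecated cgroup v1 path

Alternatively, moving the CI hosts to cgroup v2 sidesteps the deprecated code path entirely.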