=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1222 00:14:13.002843 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:29.153934 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:16:56.849136 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.827857 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.833192 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.843564 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.863887 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.904281 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:07.984676 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:08.144936 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:08.465550 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:09.106568 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:10.386867 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:12.947189 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:18.067916 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:28.308151 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:18:48.788416 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:19:29.749117 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:20:51.671033 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-722318/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1222 00:21:29.153667 1396864 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/addons-984861/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 109 (8m20.197079034s)
-- stdout --
* [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22179
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-973657" primary control-plane node in "functional-973657" cluster
* Pulling base image v0.0.48-1766219634-22260 ...
* Found network options:
- HTTP_PROXY=localhost:38127
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:38127 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000284266s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001264351s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001264351s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-973657
helpers_test.go:244: (dbg) docker inspect functional-973657:
-- stdout --
[
{
"Id": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
"Created": "2025-12-22T00:13:58.968084222Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1435135,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-22T00:13:59.032592154Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:065a636b8735485f57df2b02ed6532902f189a9c5dc304ae0ae68a778e1c9b2c",
"ResolvConfPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hostname",
"HostsPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/hosts",
"LogPath": "/var/lib/docker/containers/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1/66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1-json.log",
"Name": "/functional-973657",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"functional-973657:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-973657",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "66803363da2c02a83814dda1c0764d3abdab5acc630ac08f6a997102221d51a1",
"LowerDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7-init/diff:/var/lib/docker/overlay2/fdfc0dccbb3b6766b7981a5669907f62e120bbe774767ccbee34e6115374625f/diff",
"MergedDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/merged",
"UpperDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/diff",
"WorkDir": "/var/lib/docker/overlay2/5e1ad55fc7940958673405b2a5d9d7701d300a0b94ebc0c871b8eb28331634c7/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "functional-973657",
"Source": "/var/lib/docker/volumes/functional-973657/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "functional-973657",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-973657",
"name.minikube.sigs.k8s.io": "functional-973657",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9a7415dc7cc8c69d402ee20ae768f9dea6a8f4f19a78dc21532ae8d42f3e7899",
"SandboxKey": "/var/run/docker/netns/9a7415dc7cc8",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38390"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38391"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38394"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38392"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "38393"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-973657": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ee:06:b1:ad:4a:31",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "1e42e4d66b505bfd04d5446c52717be20340997733545e1d4203ef38f80c0dbb",
"EndpointID": "cfb1e4a2d5409c3c16cc85466e1253884fb8124967c81f53a3b06011e2792928",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-973657",
"66803363da2c"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-973657 -n functional-973657: exit status 6 (303.933152ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1222 00:22:14.550442 1440313 status.go:458] kubeconfig endpoint: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-973657 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-722318 ssh sudo umount -f /mount-9p │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ ssh │ functional-722318 ssh findmnt -T /mount-9p | grep 9p │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ mount │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdspecific-port4022870112/001:/mount-9p --alsologtostderr -v=1 --port 45835 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ ssh │ functional-722318 ssh findmnt -T /mount-9p | grep 9p │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ ssh │ functional-722318 ssh -- ls -la /mount-9p │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ ssh │ functional-722318 ssh sudo umount -f /mount-9p │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ mount │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount1 --alsologtostderr -v=1 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ mount │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount2 --alsologtostderr -v=1 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ mount │ -p functional-722318 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1841505609/001:/mount3 --alsologtostderr -v=1 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ ssh │ functional-722318 ssh findmnt -T /mount1 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ ssh │ functional-722318 ssh findmnt -T /mount2 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ ssh │ functional-722318 ssh findmnt -T /mount3 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ mount │ -p functional-722318 --kill=true │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ update-context │ functional-722318 update-context --alsologtostderr -v=2 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ update-context │ functional-722318 update-context --alsologtostderr -v=2 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ update-context │ functional-722318 update-context --alsologtostderr -v=2 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ image │ functional-722318 image ls --format short --alsologtostderr │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ image │ functional-722318 image ls --format yaml --alsologtostderr │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ ssh │ functional-722318 ssh pgrep buildkitd │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
│ image │ functional-722318 image build -t localhost/my-image:functional-722318 testdata/build --alsologtostderr │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ image │ functional-722318 image ls --format json --alsologtostderr │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ image │ functional-722318 image ls --format table --alsologtostderr │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ image │ functional-722318 image ls │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ delete │ -p functional-722318 │ functional-722318 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ 22 Dec 25 00:13 UTC │
│ start │ -p functional-973657 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ functional-973657 │ jenkins │ v1.37.0 │ 22 Dec 25 00:13 UTC │ │
└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/22 00:13:54
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1222 00:13:54.080388 1434747 out.go:360] Setting OutFile to fd 1 ...
I1222 00:13:54.080481 1434747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:54.080485 1434747 out.go:374] Setting ErrFile to fd 2...
I1222 00:13:54.080496 1434747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1222 00:13:54.080860 1434747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22179-1395000/.minikube/bin
I1222 00:13:54.081436 1434747 out.go:368] Setting JSON to false
I1222 00:13:54.082898 1434747 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":111387,"bootTime":1766251047,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1222 00:13:54.082985 1434747 start.go:143] virtualization:
I1222 00:13:54.087427 1434747 out.go:179] * [functional-973657] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1222 00:13:54.092042 1434747 out.go:179] - MINIKUBE_LOCATION=22179
I1222 00:13:54.092107 1434747 notify.go:221] Checking for updates...
I1222 00:13:54.099346 1434747 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1222 00:13:54.102666 1434747 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22179-1395000/kubeconfig
I1222 00:13:54.105930 1434747 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22179-1395000/.minikube
I1222 00:13:54.109197 1434747 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1222 00:13:54.112509 1434747 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1222 00:13:54.115849 1434747 driver.go:422] Setting default libvirt URI to qemu:///system
I1222 00:13:54.146360 1434747 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1222 00:13:54.146473 1434747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:13:54.206882 1434747 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-22 00:13:54.197215393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:13:54.206980 1434747 docker.go:319] overlay module found
I1222 00:13:54.210237 1434747 out.go:179] * Using the docker driver based on user configuration
I1222 00:13:54.213269 1434747 start.go:309] selected driver: docker
I1222 00:13:54.213279 1434747 start.go:928] validating driver "docker" against <nil>
I1222 00:13:54.213292 1434747 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1222 00:13:54.214025 1434747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1222 00:13:54.271430 1434747 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-22 00:13:54.261200801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1222 00:13:54.271570 1434747 start_flags.go:329] no existing cluster config was found, will generate one from the flags
I1222 00:13:54.271783 1434747 start_flags.go:995] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1222 00:13:54.274774 1434747 out.go:179] * Using Docker driver with root privileges
I1222 00:13:54.277559 1434747 cni.go:84] Creating CNI manager for ""
I1222 00:13:54.277620 1434747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1222 00:13:54.277628 1434747 start_flags.go:338] Found "CNI" CNI - setting NetworkPlugin=cni
I1222 00:13:54.277696 1434747 start.go:353] cluster config:
{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1222 00:13:54.280848 1434747 out.go:179] * Starting "functional-973657" primary control-plane node in "functional-973657" cluster
I1222 00:13:54.283654 1434747 cache.go:134] Beginning downloading kic base image for docker with containerd
I1222 00:13:54.286673 1434747 out.go:179] * Pulling base image v0.0.48-1766219634-22260 ...
I1222 00:13:54.289679 1434747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon
I1222 00:13:54.289705 1434747 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1222 00:13:54.289763 1434747 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4
I1222 00:13:54.289771 1434747 cache.go:65] Caching tarball of preloaded images
I1222 00:13:54.289852 1434747 preload.go:251] Found /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1222 00:13:54.289858 1434747 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
I1222 00:13:54.290208 1434747 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
I1222 00:13:54.290228 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json: {Name:mke91e43ab8a21d275c8837902e371c28943cb74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:13:54.308841 1434747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 in local docker daemon, skipping pull
I1222 00:13:54.308856 1434747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 exists in daemon, skipping load
I1222 00:13:54.308869 1434747 cache.go:243] Successfully downloaded all kic artifacts
I1222 00:13:54.308900 1434747 start.go:360] acquireMachinesLock for functional-973657: {Name:mk23c5c2a3abc92310900db50002bad061b76c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1222 00:13:54.309028 1434747 start.go:364] duration metric: took 112.871µs to acquireMachinesLock for "functional-973657"
I1222 00:13:54.309058 1434747 start.go:93] Provisioning new machine with config: &{Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1222 00:13:54.309124 1434747 start.go:125] createHost starting for "" (driver="docker")
I1222 00:13:54.312509 1434747 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1222 00:13:54.312818 1434747 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:38127 to docker env.
I1222 00:13:54.312842 1434747 start.go:159] libmachine.API.Create for "functional-973657" (driver="docker")
I1222 00:13:54.312867 1434747 client.go:173] LocalClient.Create starting
I1222 00:13:54.312950 1434747 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem
I1222 00:13:54.312984 1434747 main.go:144] libmachine: Decoding PEM data...
I1222 00:13:54.312998 1434747 main.go:144] libmachine: Parsing certificate...
I1222 00:13:54.313051 1434747 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem
I1222 00:13:54.313071 1434747 main.go:144] libmachine: Decoding PEM data...
I1222 00:13:54.313081 1434747 main.go:144] libmachine: Parsing certificate...
I1222 00:13:54.313425 1434747 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1222 00:13:54.330156 1434747 cli_runner.go:211] docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1222 00:13:54.330236 1434747 network_create.go:284] running [docker network inspect functional-973657] to gather additional debugging logs...
I1222 00:13:54.330250 1434747 cli_runner.go:164] Run: docker network inspect functional-973657
W1222 00:13:54.346156 1434747 cli_runner.go:211] docker network inspect functional-973657 returned with exit code 1
I1222 00:13:54.346175 1434747 network_create.go:287] error running [docker network inspect functional-973657]: docker network inspect functional-973657: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-973657 not found
I1222 00:13:54.346187 1434747 network_create.go:289] output of [docker network inspect functional-973657]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-973657 not found
** /stderr **
I1222 00:13:54.346291 1434747 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 00:13:54.362622 1434747 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001924a20}
I1222 00:13:54.362654 1434747 network_create.go:124] attempt to create docker network functional-973657 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1222 00:13:54.362715 1434747 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-973657 functional-973657
I1222 00:13:54.422599 1434747 network_create.go:108] docker network functional-973657 192.168.49.0/24 created
I1222 00:13:54.422631 1434747 kic.go:121] calculated static IP "192.168.49.2" for the "functional-973657" container
I1222 00:13:54.422703 1434747 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1222 00:13:54.438726 1434747 cli_runner.go:164] Run: docker volume create functional-973657 --label name.minikube.sigs.k8s.io=functional-973657 --label created_by.minikube.sigs.k8s.io=true
I1222 00:13:54.457240 1434747 oci.go:103] Successfully created a docker volume functional-973657
I1222 00:13:54.457311 1434747 cli_runner.go:164] Run: docker run --rm --name functional-973657-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-973657 --entrypoint /usr/bin/test -v functional-973657:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -d /var/lib
I1222 00:13:55.013277 1434747 oci.go:107] Successfully prepared a docker volume functional-973657
I1222 00:13:55.013347 1434747 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1222 00:13:55.013356 1434747 kic.go:194] Starting extracting preloaded images to volume ...
I1222 00:13:55.013435 1434747 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-973657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir
I1222 00:13:58.883595 1434747 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22179-1395000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-973657:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 -I lz4 -xf /preloaded.tar -C /extractDir: (3.870121267s)
I1222 00:13:58.883614 1434747 kic.go:203] duration metric: took 3.87025524s to extract preloaded images to volume ...
W1222 00:13:58.883773 1434747 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1222 00:13:58.883880 1434747 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1222 00:13:58.950629 1434747 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-973657 --name functional-973657 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-973657 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-973657 --network functional-973657 --ip 192.168.49.2 --volume functional-973657:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5
I1222 00:13:59.256535 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Running}}
I1222 00:13:59.278641 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:13:59.309334 1434747 cli_runner.go:164] Run: docker exec functional-973657 stat /var/lib/dpkg/alternatives/iptables
I1222 00:13:59.364524 1434747 oci.go:144] the created container "functional-973657" has a running status.
I1222 00:13:59.364543 1434747 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa...
I1222 00:13:59.465321 1434747 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1222 00:13:59.489428 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:13:59.521106 1434747 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1222 00:13:59.521117 1434747 kic_runner.go:114] Args: [docker exec --privileged functional-973657 chown docker:docker /home/docker/.ssh/authorized_keys]
I1222 00:13:59.567451 1434747 cli_runner.go:164] Run: docker container inspect functional-973657 --format={{.State.Status}}
I1222 00:13:59.592058 1434747 machine.go:94] provisionDockerMachine start ...
I1222 00:13:59.592149 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:13:59.612600 1434747 main.go:144] libmachine: Using SSH client type: native
I1222 00:13:59.612961 1434747 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil> [] 0s} 127.0.0.1 38390 <nil> <nil>}
I1222 00:13:59.612968 1434747 main.go:144] libmachine: About to run SSH command:
hostname
I1222 00:13:59.613675 1434747 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1222 00:14:02.750311 1434747 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
I1222 00:14:02.750327 1434747 ubuntu.go:182] provisioning hostname "functional-973657"
I1222 00:14:02.750403 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:02.768314 1434747 main.go:144] libmachine: Using SSH client type: native
I1222 00:14:02.768621 1434747 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil> [] 0s} 127.0.0.1 38390 <nil> <nil>}
I1222 00:14:02.768630 1434747 main.go:144] libmachine: About to run SSH command:
sudo hostname functional-973657 && echo "functional-973657" | sudo tee /etc/hostname
I1222 00:14:02.911259 1434747 main.go:144] libmachine: SSH cmd err, output: <nil>: functional-973657
I1222 00:14:02.911346 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:02.930003 1434747 main.go:144] libmachine: Using SSH client type: native
I1222 00:14:02.930327 1434747 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db5e0] 0x3ddae0 <nil> [] 0s} 127.0.0.1 38390 <nil> <nil>}
I1222 00:14:02.930343 1434747 main.go:144] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-973657' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-973657/g' /etc/hosts;
else
echo '127.0.1.1 functional-973657' | sudo tee -a /etc/hosts;
fi
fi
I1222 00:14:03.062333 1434747 main.go:144] libmachine: SSH cmd err, output: <nil>:
I1222 00:14:03.062352 1434747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22179-1395000/.minikube CaCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22179-1395000/.minikube}
I1222 00:14:03.062399 1434747 ubuntu.go:190] setting up certificates
I1222 00:14:03.062410 1434747 provision.go:84] configureAuth start
I1222 00:14:03.062478 1434747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
I1222 00:14:03.079790 1434747 provision.go:143] copyHostCerts
I1222 00:14:03.079863 1434747 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem, removing ...
I1222 00:14:03.079870 1434747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem
I1222 00:14:03.079944 1434747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.pem (1082 bytes)
I1222 00:14:03.080042 1434747 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem, removing ...
I1222 00:14:03.080045 1434747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem
I1222 00:14:03.080070 1434747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/cert.pem (1123 bytes)
I1222 00:14:03.080125 1434747 exec_runner.go:144] found /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem, removing ...
I1222 00:14:03.080128 1434747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem
I1222 00:14:03.080150 1434747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22179-1395000/.minikube/key.pem (1679 bytes)
I1222 00:14:03.080238 1434747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem org=jenkins.functional-973657 san=[127.0.0.1 192.168.49.2 functional-973657 localhost minikube]
I1222 00:14:03.328502 1434747 provision.go:177] copyRemoteCerts
I1222 00:14:03.328563 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1222 00:14:03.328603 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:03.347011 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:14:03.441755 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1222 00:14:03.458877 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1222 00:14:03.475601 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1222 00:14:03.492713 1434747 provision.go:87] duration metric: took 430.279769ms to configureAuth
I1222 00:14:03.492730 1434747 ubuntu.go:206] setting minikube options for container-runtime
I1222 00:14:03.492928 1434747 config.go:182] Loaded profile config "functional-973657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1222 00:14:03.492936 1434747 machine.go:97] duration metric: took 3.900869097s to provisionDockerMachine
I1222 00:14:03.492942 1434747 client.go:176] duration metric: took 9.180070402s to LocalClient.Create
I1222 00:14:03.492963 1434747 start.go:167] duration metric: took 9.180120208s to libmachine.API.Create "functional-973657"
I1222 00:14:03.492970 1434747 start.go:293] postStartSetup for "functional-973657" (driver="docker")
I1222 00:14:03.492979 1434747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1222 00:14:03.493041 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1222 00:14:03.493078 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:03.511078 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:14:03.610118 1434747 ssh_runner.go:195] Run: cat /etc/os-release
I1222 00:14:03.613490 1434747 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1222 00:14:03.613519 1434747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1222 00:14:03.613529 1434747 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/addons for local assets ...
I1222 00:14:03.613587 1434747 filesync.go:126] Scanning /home/jenkins/minikube-integration/22179-1395000/.minikube/files for local assets ...
I1222 00:14:03.613674 1434747 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem -> 13968642.pem in /etc/ssl/certs
I1222 00:14:03.613754 1434747 filesync.go:149] local asset: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts -> hosts in /etc/test/nested/copy/1396864
I1222 00:14:03.613801 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1396864
I1222 00:14:03.621527 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /etc/ssl/certs/13968642.pem (1708 bytes)
I1222 00:14:03.639649 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/test/nested/copy/1396864/hosts --> /etc/test/nested/copy/1396864/hosts (40 bytes)
I1222 00:14:03.656727 1434747 start.go:296] duration metric: took 163.743932ms for postStartSetup
I1222 00:14:03.657123 1434747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
I1222 00:14:03.674188 1434747 profile.go:143] Saving config to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/config.json ...
I1222 00:14:03.674454 1434747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1222 00:14:03.674492 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:03.691654 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:14:03.786935 1434747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1222 00:14:03.791473 1434747 start.go:128] duration metric: took 9.482336534s to createHost
I1222 00:14:03.791513 1434747 start.go:83] releasing machines lock for "functional-973657", held for 9.482466676s
I1222 00:14:03.791597 1434747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-973657
I1222 00:14:03.812998 1434747 out.go:179] * Found network options:
I1222 00:14:03.815917 1434747 out.go:179] - HTTP_PROXY=localhost:38127
W1222 00:14:03.818791 1434747 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1222 00:14:03.821578 1434747 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1222 00:14:03.824479 1434747 ssh_runner.go:195] Run: cat /version.json
I1222 00:14:03.824525 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:03.824553 1434747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1222 00:14:03.824607 1434747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-973657
I1222 00:14:03.844079 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:14:03.855729 1434747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38390 SSHKeyPath:/home/jenkins/minikube-integration/22179-1395000/.minikube/machines/functional-973657/id_rsa Username:docker}
I1222 00:14:04.033533 1434747 ssh_runner.go:195] Run: systemctl --version
I1222 00:14:04.040311 1434747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1222 00:14:04.044648 1434747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1222 00:14:04.044717 1434747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1222 00:14:04.072385 1434747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1222 00:14:04.072399 1434747 start.go:496] detecting cgroup driver to use...
I1222 00:14:04.072464 1434747 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1222 00:14:04.072542 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1222 00:14:04.088149 1434747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1222 00:14:04.101314 1434747 docker.go:218] disabling cri-docker service (if available) ...
I1222 00:14:04.101377 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1222 00:14:04.119632 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1222 00:14:04.138566 1434747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1222 00:14:04.261420 1434747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1222 00:14:04.384274 1434747 docker.go:234] disabling docker service ...
I1222 00:14:04.384399 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1222 00:14:04.408049 1434747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1222 00:14:04.422659 1434747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1222 00:14:04.546222 1434747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1222 00:14:04.677163 1434747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1222 00:14:04.690211 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1222 00:14:04.705148 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1222 00:14:04.714539 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1222 00:14:04.724063 1434747 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1222 00:14:04.724120 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1222 00:14:04.733018 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1222 00:14:04.741947 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1222 00:14:04.750900 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1222 00:14:04.760026 1434747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1222 00:14:04.768055 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1222 00:14:04.776956 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1222 00:14:04.786324 1434747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1222 00:14:04.795551 1434747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1222 00:14:04.803390 1434747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1222 00:14:04.811187 1434747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1222 00:14:04.928877 1434747 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1222 00:14:05.074928 1434747 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1222 00:14:05.075008 1434747 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1222 00:14:05.079227 1434747 start.go:564] Will wait 60s for crictl version
I1222 00:14:05.079304 1434747 ssh_runner.go:195] Run: which crictl
I1222 00:14:05.083261 1434747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1222 00:14:05.112631 1434747 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.1
RuntimeApiVersion: v1
I1222 00:14:05.112701 1434747 ssh_runner.go:195] Run: containerd --version
I1222 00:14:05.135087 1434747 ssh_runner.go:195] Run: containerd --version
I1222 00:14:05.160292 1434747 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.1 ...
I1222 00:14:05.163313 1434747 cli_runner.go:164] Run: docker network inspect functional-973657 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1222 00:14:05.181060 1434747 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1222 00:14:05.185532 1434747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1222 00:14:05.196784 1434747 kubeadm.go:884] updating cluster {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1222 00:14:05.196908 1434747 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1222 00:14:05.196976 1434747 ssh_runner.go:195] Run: sudo crictl images --output json
I1222 00:14:05.223983 1434747 containerd.go:627] all images are preloaded for containerd runtime.
I1222 00:14:05.223996 1434747 containerd.go:534] Images already preloaded, skipping extraction
I1222 00:14:05.224059 1434747 ssh_runner.go:195] Run: sudo crictl images --output json
I1222 00:14:05.248737 1434747 containerd.go:627] all images are preloaded for containerd runtime.
I1222 00:14:05.248749 1434747 cache_images.go:86] Images are preloaded, skipping loading
I1222 00:14:05.248755 1434747 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-rc.1 containerd true true} ...
I1222 00:14:05.248868 1434747 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-973657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1222 00:14:05.248939 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
I1222 00:14:05.273499 1434747 cni.go:84] Creating CNI manager for ""
I1222 00:14:05.273509 1434747 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1222 00:14:05.273530 1434747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1222 00:14:05.273551 1434747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-973657 NodeName:functional-973657 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1222 00:14:05.273664 1434747 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-973657"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-rc.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1222 00:14:05.273736 1434747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
I1222 00:14:05.281614 1434747 binaries.go:51] Found k8s binaries, skipping transfer
I1222 00:14:05.281686 1434747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1222 00:14:05.289484 1434747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
I1222 00:14:05.302693 1434747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
I1222 00:14:05.316167 1434747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
I1222 00:14:05.329431 1434747 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1222 00:14:05.333276 1434747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1222 00:14:05.344430 1434747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1222 00:14:05.454216 1434747 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1222 00:14:05.470647 1434747 certs.go:69] Setting up /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657 for IP: 192.168.49.2
I1222 00:14:05.470657 1434747 certs.go:195] generating shared ca certs ...
I1222 00:14:05.470671 1434747 certs.go:227] acquiring lock for ca certs: {Name:mk4e1172e73c8d9b926824a39d7e920772302ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:05.470821 1434747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key
I1222 00:14:05.470862 1434747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key
I1222 00:14:05.470867 1434747 certs.go:257] generating profile certs ...
I1222 00:14:05.470924 1434747 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key
I1222 00:14:05.470934 1434747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt with IP's: []
I1222 00:14:05.592301 1434747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt ...
I1222 00:14:05.592323 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.crt: {Name:mk2008a9f32332b0a767a07c3d3596e331ba3c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:05.592530 1434747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key ...
I1222 00:14:05.592536 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/client.key: {Name:mkfa9740b6557facdd822dc4551cf4042fd71055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:05.592627 1434747 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081
I1222 00:14:05.592638 1434747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1222 00:14:05.833387 1434747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081 ...
I1222 00:14:05.833402 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081: {Name:mk3271dbccbe2a4dc365c84e739d17aefc5b2369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:05.833603 1434747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081 ...
I1222 00:14:05.833615 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081: {Name:mkdada2538e53460dbc1c20a3031cc8f693c8bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:05.833707 1434747 certs.go:382] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt.ec70d081 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt
I1222 00:14:05.833787 1434747 certs.go:386] copying /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key.ec70d081 -> /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key
I1222 00:14:05.833840 1434747 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key
I1222 00:14:05.833852 1434747 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt with IP's: []
I1222 00:14:06.387999 1434747 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt ...
I1222 00:14:06.388016 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt: {Name:mk402bafb7d5e77d7080a6a362b82e18cc99f2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:06.388215 1434747 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key ...
I1222 00:14:06.388224 1434747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key: {Name:mkc90908b315943a306e167fccdf777e14bf27fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1222 00:14:06.388409 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem (1338 bytes)
W1222 00:14:06.388455 1434747 certs.go:480] ignoring /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864_empty.pem, impossibly tiny 0 bytes
I1222 00:14:06.388463 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca-key.pem (1675 bytes)
I1222 00:14:06.388493 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/ca.pem (1082 bytes)
I1222 00:14:06.388517 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/cert.pem (1123 bytes)
I1222 00:14:06.388540 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/key.pem (1679 bytes)
I1222 00:14:06.388584 1434747 certs.go:484] found cert: /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem (1708 bytes)
I1222 00:14:06.389160 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1222 00:14:06.409771 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1222 00:14:06.429468 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1222 00:14:06.448353 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1222 00:14:06.467407 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1222 00:14:06.485305 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1222 00:14:06.504007 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1222 00:14:06.523900 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/profiles/functional-973657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1222 00:14:06.542211 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/files/etc/ssl/certs/13968642.pem --> /usr/share/ca-certificates/13968642.pem (1708 bytes)
I1222 00:14:06.561160 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1222 00:14:06.580129 1434747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22179-1395000/.minikube/certs/1396864.pem --> /usr/share/ca-certificates/1396864.pem (1338 bytes)
I1222 00:14:06.598187 1434747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
I1222 00:14:06.611358 1434747 ssh_runner.go:195] Run: openssl version
I1222 00:14:06.617865 1434747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13968642.pem
I1222 00:14:06.625738 1434747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13968642.pem /etc/ssl/certs/13968642.pem
I1222 00:14:06.633633 1434747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13968642.pem
I1222 00:14:06.637729 1434747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 22 00:13 /usr/share/ca-certificates/13968642.pem
I1222 00:14:06.637801 1434747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13968642.pem
I1222 00:14:06.679835 1434747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1222 00:14:06.687530 1434747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13968642.pem /etc/ssl/certs/3ec20f2e.0
I1222 00:14:06.695222 1434747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1222 00:14:06.702721 1434747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1222 00:14:06.710446 1434747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1222 00:14:06.714802 1434747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 22 00:04 /usr/share/ca-certificates/minikubeCA.pem
I1222 00:14:06.714864 1434747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1222 00:14:06.757681 1434747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1222 00:14:06.765501 1434747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1222 00:14:06.773216 1434747 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1396864.pem
I1222 00:14:06.780813 1434747 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1396864.pem /etc/ssl/certs/1396864.pem
I1222 00:14:06.789077 1434747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1396864.pem
I1222 00:14:06.792836 1434747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 22 00:13 /usr/share/ca-certificates/1396864.pem
I1222 00:14:06.792919 1434747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1396864.pem
I1222 00:14:06.837792 1434747 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1222 00:14:06.845379 1434747 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1396864.pem /etc/ssl/certs/51391683.0
I1222 00:14:06.852862 1434747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1222 00:14:06.856390 1434747 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1222 00:14:06.856434 1434747 kubeadm.go:401] StartCluster: {Name:functional-973657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766219634-22260@sha256:a916181ae166850e036ee1da6e28cd4888bd2a1d8dd51b68e1b213ae6c4370b5 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-973657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1222 00:14:06.856503 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1222 00:14:06.856569 1434747 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1222 00:14:06.884928 1434747 cri.go:96] found id: ""
I1222 00:14:06.885003 1434747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1222 00:14:06.892959 1434747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1222 00:14:06.900700 1434747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1222 00:14:06.900762 1434747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1222 00:14:06.908686 1434747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1222 00:14:06.908699 1434747 kubeadm.go:158] found existing configuration files:
I1222 00:14:06.908748 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1222 00:14:06.916410 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1222 00:14:06.916466 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1222 00:14:06.924171 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1222 00:14:06.932334 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1222 00:14:06.932392 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1222 00:14:06.940270 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1222 00:14:06.948186 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1222 00:14:06.948243 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1222 00:14:06.955886 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1222 00:14:06.963816 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1222 00:14:06.963872 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1222 00:14:06.971497 1434747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1222 00:14:07.038574 1434747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1222 00:14:07.039052 1434747 kubeadm.go:319] [preflight] Running pre-flight checks
I1222 00:14:07.114570 1434747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1222 00:14:07.114641 1434747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1222 00:14:07.114675 1434747 kubeadm.go:319] OS: Linux
I1222 00:14:07.114743 1434747 kubeadm.go:319] CGROUPS_CPU: enabled
I1222 00:14:07.114807 1434747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1222 00:14:07.114854 1434747 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1222 00:14:07.114901 1434747 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1222 00:14:07.114966 1434747 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1222 00:14:07.115023 1434747 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1222 00:14:07.115075 1434747 kubeadm.go:319] CGROUPS_PIDS: enabled
I1222 00:14:07.115123 1434747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1222 00:14:07.115168 1434747 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1222 00:14:07.187029 1434747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1222 00:14:07.187134 1434747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1222 00:14:07.187225 1434747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1222 00:14:07.194520 1434747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1222 00:14:07.200872 1434747 out.go:252] - Generating certificates and keys ...
I1222 00:14:07.200992 1434747 kubeadm.go:319] [certs] Using existing ca certificate authority
I1222 00:14:07.201058 1434747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1222 00:14:07.471321 1434747 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1222 00:14:07.831434 1434747 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1222 00:14:07.944775 1434747 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1222 00:14:08.198934 1434747 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1222 00:14:08.637875 1434747 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1222 00:14:08.638218 1434747 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1222 00:14:08.719954 1434747 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1222 00:14:08.720288 1434747 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1222 00:14:08.778606 1434747 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1222 00:14:08.944050 1434747 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1222 00:14:09.171967 1434747 kubeadm.go:319] [certs] Generating "sa" key and public key
I1222 00:14:09.172092 1434747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1222 00:14:09.263026 1434747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1222 00:14:09.523345 1434747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1222 00:14:10.037237 1434747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1222 00:14:10.849577 1434747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1222 00:14:11.642270 1434747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1222 00:14:11.643007 1434747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1222 00:14:11.647842 1434747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1222 00:14:11.651572 1434747 out.go:252] - Booting up control plane ...
I1222 00:14:11.651677 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1222 00:14:11.651764 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1222 00:14:11.652321 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1222 00:14:11.668970 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1222 00:14:11.669271 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1222 00:14:11.677212 1434747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1222 00:14:11.677475 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1222 00:14:11.677516 1434747 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1222 00:14:11.814736 1434747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1222 00:14:11.814848 1434747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1222 00:18:11.810523 1434747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000284266s
I1222 00:18:11.810543 1434747 kubeadm.go:319]
I1222 00:18:11.810599 1434747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1222 00:18:11.810631 1434747 kubeadm.go:319] - The kubelet is not running
I1222 00:18:11.810735 1434747 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1222 00:18:11.810738 1434747 kubeadm.go:319]
I1222 00:18:11.810842 1434747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1222 00:18:11.810872 1434747 kubeadm.go:319] - 'systemctl status kubelet'
I1222 00:18:11.810902 1434747 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1222 00:18:11.810905 1434747 kubeadm.go:319]
I1222 00:18:11.816392 1434747 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1222 00:18:11.816886 1434747 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1222 00:18:11.817005 1434747 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1222 00:18:11.817260 1434747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1222 00:18:11.817264 1434747 kubeadm.go:319]
W1222 00:18:11.817461 1434747 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-973657 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000284266s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
I1222 00:18:11.817551 1434747 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1222 00:18:11.818136 1434747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1222 00:18:12.224633 1434747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1222 00:18:12.237999 1434747 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1222 00:18:12.238053 1434747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1222 00:18:12.246163 1434747 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1222 00:18:12.246172 1434747 kubeadm.go:158] found existing configuration files:
I1222 00:18:12.246225 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1222 00:18:12.254057 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1222 00:18:12.254133 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1222 00:18:12.261335 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1222 00:18:12.269033 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1222 00:18:12.269088 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1222 00:18:12.276596 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1222 00:18:12.284147 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1222 00:18:12.284200 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1222 00:18:12.291464 1434747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1222 00:18:12.298844 1434747 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1222 00:18:12.298907 1434747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1222 00:18:12.306236 1434747 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1222 00:18:12.342156 1434747 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
I1222 00:18:12.342204 1434747 kubeadm.go:319] [preflight] Running pre-flight checks
I1222 00:18:12.415534 1434747 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1222 00:18:12.415599 1434747 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1222 00:18:12.415633 1434747 kubeadm.go:319] OS: Linux
I1222 00:18:12.415678 1434747 kubeadm.go:319] CGROUPS_CPU: enabled
I1222 00:18:12.415725 1434747 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1222 00:18:12.415772 1434747 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1222 00:18:12.415828 1434747 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1222 00:18:12.415876 1434747 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1222 00:18:12.415928 1434747 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1222 00:18:12.415972 1434747 kubeadm.go:319] CGROUPS_PIDS: enabled
I1222 00:18:12.416018 1434747 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1222 00:18:12.416064 1434747 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1222 00:18:12.482634 1434747 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1222 00:18:12.482785 1434747 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1222 00:18:12.482888 1434747 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1222 00:18:12.494592 1434747 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1222 00:18:12.499673 1434747 out.go:252] - Generating certificates and keys ...
I1222 00:18:12.499775 1434747 kubeadm.go:319] [certs] Using existing ca certificate authority
I1222 00:18:12.499844 1434747 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1222 00:18:12.499940 1434747 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1222 00:18:12.500008 1434747 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1222 00:18:12.500082 1434747 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1222 00:18:12.500140 1434747 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1222 00:18:12.500210 1434747 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1222 00:18:12.500279 1434747 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1222 00:18:12.500369 1434747 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1222 00:18:12.500470 1434747 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1222 00:18:12.500519 1434747 kubeadm.go:319] [certs] Using the existing "sa" key
I1222 00:18:12.500579 1434747 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1222 00:18:12.662860 1434747 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1222 00:18:13.066868 1434747 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1222 00:18:13.350920 1434747 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1222 00:18:13.534793 1434747 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1222 00:18:13.594486 1434747 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1222 00:18:13.595073 1434747 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1222 00:18:13.597621 1434747 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1222 00:18:13.600727 1434747 out.go:252] - Booting up control plane ...
I1222 00:18:13.600826 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1222 00:18:13.600912 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1222 00:18:13.601527 1434747 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1222 00:18:13.621978 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1222 00:18:13.622109 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1222 00:18:13.631022 1434747 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1222 00:18:13.631970 1434747 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1222 00:18:13.632083 1434747 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1222 00:18:13.775225 1434747 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1222 00:18:13.775337 1434747 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1222 00:22:13.776476 1434747 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001264351s
I1222 00:22:13.776503 1434747 kubeadm.go:319]
I1222 00:22:13.776569 1434747 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1222 00:22:13.776626 1434747 kubeadm.go:319] - The kubelet is not running
I1222 00:22:13.776735 1434747 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1222 00:22:13.776739 1434747 kubeadm.go:319]
I1222 00:22:13.776843 1434747 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1222 00:22:13.776873 1434747 kubeadm.go:319] - 'systemctl status kubelet'
I1222 00:22:13.776902 1434747 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1222 00:22:13.776907 1434747 kubeadm.go:319]
I1222 00:22:13.781004 1434747 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1222 00:22:13.781415 1434747 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1222 00:22:13.781523 1434747 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1222 00:22:13.781757 1434747 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1222 00:22:13.781761 1434747 kubeadm.go:319]
I1222 00:22:13.781829 1434747 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1222 00:22:13.781894 1434747 kubeadm.go:403] duration metric: took 8m6.925464632s to StartCluster
I1222 00:22:13.781926 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1222 00:22:13.781991 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
I1222 00:22:13.807444 1434747 cri.go:96] found id: ""
I1222 00:22:13.807466 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.807473 1434747 logs.go:284] No container was found matching "kube-apiserver"
I1222 00:22:13.807479 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1222 00:22:13.807537 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
I1222 00:22:13.831579 1434747 cri.go:96] found id: ""
I1222 00:22:13.831593 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.831600 1434747 logs.go:284] No container was found matching "etcd"
I1222 00:22:13.831606 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1222 00:22:13.831670 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
I1222 00:22:13.857008 1434747 cri.go:96] found id: ""
I1222 00:22:13.857024 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.857031 1434747 logs.go:284] No container was found matching "coredns"
I1222 00:22:13.857036 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1222 00:22:13.857096 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
I1222 00:22:13.883759 1434747 cri.go:96] found id: ""
I1222 00:22:13.883772 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.883778 1434747 logs.go:284] No container was found matching "kube-scheduler"
I1222 00:22:13.883784 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1222 00:22:13.883841 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
I1222 00:22:13.912604 1434747 cri.go:96] found id: ""
I1222 00:22:13.912618 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.912625 1434747 logs.go:284] No container was found matching "kube-proxy"
I1222 00:22:13.912630 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1222 00:22:13.912686 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
I1222 00:22:13.941261 1434747 cri.go:96] found id: ""
I1222 00:22:13.941275 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.941287 1434747 logs.go:284] No container was found matching "kube-controller-manager"
I1222 00:22:13.941292 1434747 cri.go:61] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1222 00:22:13.941351 1434747 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
I1222 00:22:13.967493 1434747 cri.go:96] found id: ""
I1222 00:22:13.967508 1434747 logs.go:282] 0 containers: []
W1222 00:22:13.967516 1434747 logs.go:284] No container was found matching "kindnet"
I1222 00:22:13.967526 1434747 logs.go:123] Gathering logs for describe nodes ...
I1222 00:22:13.967539 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1222 00:22:14.052311 1434747 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1222 00:22:14.041625 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.042560 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.044927 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.046227 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.046628 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1222 00:22:14.041625 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.042560 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.044927 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.046227 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:14.046628 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1222 00:22:14.052322 1434747 logs.go:123] Gathering logs for containerd ...
I1222 00:22:14.052332 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1222 00:22:14.101192 1434747 logs.go:123] Gathering logs for container status ...
I1222 00:22:14.101216 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1222 00:22:14.129891 1434747 logs.go:123] Gathering logs for kubelet ...
I1222 00:22:14.129911 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1222 00:22:14.187654 1434747 logs.go:123] Gathering logs for dmesg ...
I1222 00:22:14.187676 1434747 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1222 00:22:14.203699 1434747 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001264351s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1222 00:22:14.203745 1434747 out.go:285] *
W1222 00:22:14.204002 1434747 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001264351s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1222 00:22:14.204248 1434747 out.go:285] *
W1222 00:22:14.206475 1434747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1222 00:22:14.212598 1434747 out.go:203]
W1222 00:22:14.215480 1434747 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-rc.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001264351s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1222 00:22:14.215530 1434747 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1222 00:22:14.215551 1434747 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1222 00:22:14.218629 1434747 out.go:203]
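
The suggestion above is the usual next step for this failure class. A minimal sketch of retrying the same profile with the suggested kubelet cgroup-driver override (profile name, driver, runtime and Kubernetes version are taken from this run; whether the override alone is enough on a cgroup v1 host is not guaranteed):

    out/minikube-linux-arm64 start -p functional-973657 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-rc.1 \
      --extra-config=kubelet.cgroup-driver=systemd
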
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013107696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013128776Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013172493Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013193432Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013209375Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013287521Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013300576Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013320358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013338861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013381118Z" level=info msg="Connect containerd service"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.013783410Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.014531461Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032162884Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032238388Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032276173Z" level=info msg="Start subscribing containerd event"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.032330877Z" level=info msg="Start recovering state"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071561572Z" level=info msg="Start event monitor"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071763854Z" level=info msg="Start cni network conf syncer for default"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071833344Z" level=info msg="Start streaming server"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071894144Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.071949259Z" level=info msg="runtime interface starting up..."
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.072003758Z" level=info msg="starting plugins..."
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.072069350Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 22 00:14:05 functional-973657 systemd[1]: Started containerd.service - containerd container runtime.
Dec 22 00:14:05 functional-973657 containerd[764]: time="2025-12-22T00:14:05.074200081Z" level=info msg="containerd successfully booted in 0.091082s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1222 00:22:15.230343 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:15.231133 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:15.232769 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:15.233135 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1222 00:22:15.234652 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec21 21:36] overlayfs: idmapped layers are currently not supported
[ +34.220408] overlayfs: idmapped layers are currently not supported
[Dec21 21:37] overlayfs: idmapped layers are currently not supported
[Dec21 21:38] overlayfs: idmapped layers are currently not supported
[Dec21 21:39] overlayfs: idmapped layers are currently not supported
[ +42.728151] overlayfs: idmapped layers are currently not supported
[Dec21 21:40] overlayfs: idmapped layers are currently not supported
[Dec21 21:42] overlayfs: idmapped layers are currently not supported
[Dec21 21:43] overlayfs: idmapped layers are currently not supported
[Dec21 22:01] overlayfs: idmapped layers are currently not supported
[Dec21 22:03] overlayfs: idmapped layers are currently not supported
[Dec21 22:04] overlayfs: idmapped layers are currently not supported
[Dec21 22:06] overlayfs: idmapped layers are currently not supported
[Dec21 22:07] overlayfs: idmapped layers are currently not supported
[Dec21 22:09] kauditd_printk_skb: 8 callbacks suppressed
[Dec21 22:19] FS-Cache: Duplicate cookie detected
[ +0.000799] FS-Cache: O-cookie c=000001b7 [p=00000002 fl=222 nc=0 na=1]
[ +0.000997] FS-Cache: O-cookie d=000000006644c6a1{9P.session} n=0000000059d48210
[ +0.001156] FS-Cache: O-key=[10] '34333231303139373837'
[ +0.000780] FS-Cache: N-cookie c=000001b8 [p=00000002 fl=2 nc=0 na=1]
[ +0.000956] FS-Cache: N-cookie d=000000006644c6a1{9P.session} n=000000007a8030ee
[ +0.001187] FS-Cache: N-key=[10] '34333231303139373837'
[Dec22 00:03] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
00:22:15 up 1 day, 7:04, 0 user, load average: 0.26, 0.58, 1.23
Linux functional-973657 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 22 00:22:11 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 22 00:22:12 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 22 00:22:12 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:12 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:12 functional-973657 kubelet[4729]: E1222 00:22:12.534194 4729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 22 00:22:12 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 22 00:22:12 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 22 00:22:13 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:13 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:13 functional-973657 kubelet[4735]: E1222 00:22:13.284589 4735 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 22 00:22:13 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 22 00:22:13 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:13 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:14 functional-973657 kubelet[4802]: E1222 00:22:14.061752 4802 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 22 00:22:14 functional-973657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:14 functional-973657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 22 00:22:14 functional-973657 kubelet[4842]: E1222 00:22:14.798743 4842 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 22 00:22:14 functional-973657 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
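
The kubelet journal above shows the concrete reason the health check never passed: kubelet v1.35.0-rc.1 refuses to validate its configuration on a cgroup v1 host. The [WARNING SystemVerification] messages earlier in the log say cgroup v1 can only be kept by explicitly setting the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of that opt-in as a KubeletConfiguration fragment (the lowercase field spelling, and whether minikube can deliver it through its kubeadm patches, are assumptions):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Explicit opt-in to deprecated cgroup v1 support, per the SystemVerification
    # warning; migrating the host to cgroup v2 is the long-term fix.
    failCgroupV1: false
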
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-973657 -n functional-973657: exit status 6 (340.483444ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1222 00:22:15.684838 1440527 status.go:458] kubeconfig endpoint: get endpoint: "functional-973657" does not appear in /home/jenkins/minikube-integration/22179-1395000/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-973657" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (501.67s)
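
For reference, one way to confirm which cgroup version the host in this run is actually using (not part of the test output; shown only because the failure hinges on it):

    stat -fc %T /sys/fs/cgroup
    # prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1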