=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1210 06:25:14.424150 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:25:42.125124 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.782593 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.789163 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.800627 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.822106 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.863613 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:35.945128 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:36.106721 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:36.428357 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:37.070353 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:38.351889 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:40.914772 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:46.036258 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:27:56.277741 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:28:16.759475 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:28:57.720948 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:30:14.429471 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/addons-868996/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 06:30:19.646145 786751 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-634209/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.266236464s)
-- stdout --
* [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22089
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-534748" primary control-plane node in "functional-534748" cluster
* Pulling base image v0.0.48-1765319469-22089 ...
* Found network options:
- HTTP_PROXY=localhost:46303
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001121505s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00106645s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
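A minimal remediation sketch, assembled only from the hints minikube and kubeadm print above (the NO_PROXY warning, the journalctl pointer, and the --extra-config suggestion); these are unverified next steps under those hints, not a confirmed fix for this failure:

# add the minikube IP to NO_PROXY before starting, per the proxy warning above
export NO_PROXY="${NO_PROXY},192.168.49.2"

# inspect the kubelet on the node, as kubeadm suggests
out/minikube-linux-arm64 -p functional-534748 ssh "sudo journalctl -xeu kubelet | tail -n 50"

# retry the same start with the cgroup driver override from minikube's suggestion; the
# kubeadm warning also says kubelet v1.35+ on a cgroup v1 host needs FailCgroupV1 set to
# false in its KubeletConfiguration (a config field, not a command-line flag)
out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 \
  --wait=all --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd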
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-534748
helpers_test.go:244: (dbg) docker inspect functional-534748:
-- stdout --
[
{
"Id": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
"Created": "2025-12-10T06:23:23.608302198Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 825111,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-10T06:23:23.673039154Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
"ResolvConfPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hostname",
"HostsPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/hosts",
"LogPath": "/var/lib/docker/containers/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db/afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db-json.log",
"Name": "/functional-534748",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-534748:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-534748",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "afb46bc1850ed06409fd349a7e0bed96881e8fe0ba7d5da0d4e10c753af025db",
"LowerDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b-init/diff:/var/lib/docker/overlay2/4778aebe962f337249ea4edb4aa75616b879ab446c0700a1de338f45632072a4/diff",
"MergedDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/merged",
"UpperDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/diff",
"WorkDir": "/var/lib/docker/overlay2/7d7115ce4389557980f6e8cdfb5888b231935714db6c9bb5fe39b9e5ad09a10b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-534748",
"Source": "/var/lib/docker/volumes/functional-534748/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-534748",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-534748",
"name.minikube.sigs.k8s.io": "functional-534748",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "3110adc4cbcb3c834e173481804e433b3e0d0c32a77d5d828fb821433a717f76",
"SandboxKey": "/var/run/docker/netns/3110adc4cbcb",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33530"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33531"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33534"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33532"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33533"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-534748": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ca:67:c6:ed:32:ee",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "5c6adbf364f07065a2170dca5c03ba4b1a3df833c656e117d5727d09797ca30e",
"EndpointID": "ba255b7075cbb83aa425533c0034d7778d609f47fa6442925eb6c8393edb0fa6",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-534748",
"afb46bc1850e"
]
}
}
}
}
]
-- /stdout --
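The full JSON above can be reduced to just the fields that matter with docker's -f Go templates; a small sketch using standard docker inspect templating, with expected values taken from the output above:

# container IP on the functional-534748 network (expect 192.168.49.2)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-534748

# host port published for the apiserver port 8441/tcp (expect 33533)
docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-534748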
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-534748 -n functional-534748: exit status 6 (323.371666ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1210 06:31:39.116895 830272 status.go:458] kubeconfig endpoint: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
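The stale-context warning and the status.go error both point at a kubeconfig with no entry for this profile; a sketch of the fix the warning itself suggests, using the KUBECONFIG path exported for this run:

# regenerate the kubeconfig entry for the profile, as the warning suggests
KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig \
  out/minikube-linux-arm64 -p functional-534748 update-context

# confirm the context is present again
KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig \
  kubectl config get-contexts functional-534748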
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-534748 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-634209 ssh sudo umount -f /mount-9p │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ mount │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdspecific-port1205426619/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ ssh │ functional-634209 ssh findmnt -T /mount-9p | grep 9p │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ ssh │ functional-634209 ssh findmnt -T /mount-9p | grep 9p │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ ssh │ functional-634209 ssh -- ls -la /mount-9p │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ ssh │ functional-634209 ssh sudo umount -f /mount-9p │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ mount │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount1 --alsologtostderr -v=1 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ mount │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount2 --alsologtostderr -v=1 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ mount │ -p functional-634209 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2460956714/001:/mount3 --alsologtostderr -v=1 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ ssh │ functional-634209 ssh findmnt -T /mount1 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ ssh │ functional-634209 ssh findmnt -T /mount2 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ ssh │ functional-634209 ssh findmnt -T /mount3 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ mount │ -p functional-634209 --kill=true │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ update-context │ functional-634209 update-context --alsologtostderr -v=2 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ update-context │ functional-634209 update-context --alsologtostderr -v=2 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ update-context │ functional-634209 update-context --alsologtostderr -v=2 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ image │ functional-634209 image ls --format short --alsologtostderr │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ image │ functional-634209 image ls --format yaml --alsologtostderr │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ ssh │ functional-634209 ssh pgrep buildkitd │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
│ image │ functional-634209 image ls --format json --alsologtostderr │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ image │ functional-634209 image build -t localhost/my-image:functional-634209 testdata/build --alsologtostderr │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ image │ functional-634209 image ls --format table --alsologtostderr │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ image │ functional-634209 image ls │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ delete │ -p functional-634209 │ functional-634209 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ 10 Dec 25 06:23 UTC │
│ start │ -p functional-534748 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-534748 │ jenkins │ v1.37.0 │ 10 Dec 25 06:23 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/10 06:23:18
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1210 06:23:18.561811 824724 out.go:360] Setting OutFile to fd 1 ...
I1210 06:23:18.561934 824724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:18.561938 824724 out.go:374] Setting ErrFile to fd 2...
I1210 06:23:18.561943 824724 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 06:23:18.562176 824724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-784887/.minikube/bin
I1210 06:23:18.562596 824724 out.go:368] Setting JSON to false
I1210 06:23:18.563405 824724 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":18323,"bootTime":1765329476,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1210 06:23:18.563461 824724 start.go:143] virtualization:
I1210 06:23:18.567966 824724 out.go:179] * [functional-534748] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1210 06:23:18.572615 824724 out.go:179] - MINIKUBE_LOCATION=22089
I1210 06:23:18.572738 824724 notify.go:221] Checking for updates...
I1210 06:23:18.579560 824724 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1210 06:23:18.582785 824724 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22089-784887/kubeconfig
I1210 06:23:18.585998 824724 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-784887/.minikube
I1210 06:23:18.589203 824724 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1210 06:23:18.592315 824724 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1210 06:23:18.595531 824724 driver.go:422] Setting default libvirt URI to qemu:///system
I1210 06:23:18.616494 824724 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1210 06:23:18.616614 824724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:23:18.685828 824724 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 06:23:18.676798926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:23:18.685928 824724 docker.go:319] overlay module found
I1210 06:23:18.689262 824724 out.go:179] * Using the docker driver based on user configuration
I1210 06:23:18.692221 824724 start.go:309] selected driver: docker
I1210 06:23:18.692229 824724 start.go:927] validating driver "docker" against <nil>
I1210 06:23:18.692240 824724 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1210 06:23:18.692974 824724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 06:23:18.746303 824724 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-10 06:23:18.736875636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 06:23:18.746448 824724 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1210 06:23:18.746748 824724 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1210 06:23:18.749798 824724 out.go:179] * Using Docker driver with root privileges
I1210 06:23:18.752644 824724 cni.go:84] Creating CNI manager for ""
I1210 06:23:18.752700 824724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1210 06:23:18.752706 824724 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1210 06:23:18.752781 824724 start.go:353] cluster config:
{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 06:23:18.755854 824724 out.go:179] * Starting "functional-534748" primary control-plane node in "functional-534748" cluster
I1210 06:23:18.758672 824724 cache.go:134] Beginning downloading kic base image for docker with containerd
I1210 06:23:18.761562 824724 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
I1210 06:23:18.764465 824724 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1210 06:23:18.764502 824724 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1210 06:23:18.764509 824724 cache.go:65] Caching tarball of preloaded images
I1210 06:23:18.764515 824724 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
I1210 06:23:18.764600 824724 preload.go:238] Found /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1210 06:23:18.764609 824724 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1210 06:23:18.764951 824724 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
I1210 06:23:18.764969 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json: {Name:mk60a55156bfb56daf7cb6bb30d194027be79f16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:18.783876 824724 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
I1210 06:23:18.783888 824724 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
I1210 06:23:18.783900 824724 cache.go:243] Successfully downloaded all kic artifacts
I1210 06:23:18.783937 824724 start.go:360] acquireMachinesLock for functional-534748: {Name:mkd9a3d78ae3a00b69c5be0f7badb099aea924eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 06:23:18.784041 824724 start.go:364] duration metric: took 90.307µs to acquireMachinesLock for "functional-534748"
I1210 06:23:18.784065 824724 start.go:93] Provisioning new machine with config: &{Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1210 06:23:18.784141 824724 start.go:125] createHost starting for "" (driver="docker")
I1210 06:23:18.787477 824724 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1210 06:23:18.787747 824724 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
I1210 06:23:18.787771 824724 start.go:159] libmachine.API.Create for "functional-534748" (driver="docker")
I1210 06:23:18.787791 824724 client.go:173] LocalClient.Create starting
I1210 06:23:18.787850 824724 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem
I1210 06:23:18.787885 824724 main.go:143] libmachine: Decoding PEM data...
I1210 06:23:18.787900 824724 main.go:143] libmachine: Parsing certificate...
I1210 06:23:18.787948 824724 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem
I1210 06:23:18.787964 824724 main.go:143] libmachine: Decoding PEM data...
I1210 06:23:18.787975 824724 main.go:143] libmachine: Parsing certificate...
I1210 06:23:18.788340 824724 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 06:23:18.803245 824724 cli_runner.go:211] docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 06:23:18.803346 824724 network_create.go:284] running [docker network inspect functional-534748] to gather additional debugging logs...
I1210 06:23:18.803363 824724 cli_runner.go:164] Run: docker network inspect functional-534748
W1210 06:23:18.819409 824724 cli_runner.go:211] docker network inspect functional-534748 returned with exit code 1
I1210 06:23:18.819428 824724 network_create.go:287] error running [docker network inspect functional-534748]: docker network inspect functional-534748: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-534748 not found
I1210 06:23:18.819440 824724 network_create.go:289] output of [docker network inspect functional-534748]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-534748 not found
** /stderr **
I1210 06:23:18.819584 824724 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:23:18.836286 824724 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197a210}
I1210 06:23:18.836317 824724 network_create.go:124] attempt to create docker network functional-534748 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1210 06:23:18.836374 824724 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-534748 functional-534748
I1210 06:23:18.898735 824724 network_create.go:108] docker network functional-534748 192.168.49.0/24 created
I1210 06:23:18.898766 824724 kic.go:121] calculated static IP "192.168.49.2" for the "functional-534748" container
I1210 06:23:18.898840 824724 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1210 06:23:18.915016 824724 cli_runner.go:164] Run: docker volume create functional-534748 --label name.minikube.sigs.k8s.io=functional-534748 --label created_by.minikube.sigs.k8s.io=true
I1210 06:23:18.932547 824724 oci.go:103] Successfully created a docker volume functional-534748
I1210 06:23:18.932643 824724 cli_runner.go:164] Run: docker run --rm --name functional-534748-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-534748 --entrypoint /usr/bin/test -v functional-534748:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
I1210 06:23:19.518021 824724 oci.go:107] Successfully prepared a docker volume functional-534748
I1210 06:23:19.518071 824724 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1210 06:23:19.518079 824724 kic.go:194] Starting extracting preloaded images to volume ...
I1210 06:23:19.518148 824724 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-534748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
I1210 06:23:23.539706 824724 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-784887/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-534748:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.021522998s)
I1210 06:23:23.539729 824724 kic.go:203] duration metric: took 4.02164688s to extract preloaded images to volume ...
W1210 06:23:23.539870 824724 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1210 06:23:23.539977 824724 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1210 06:23:23.592919 824724 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-534748 --name functional-534748 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-534748 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-534748 --network functional-534748 --ip 192.168.49.2 --volume functional-534748:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
I1210 06:23:23.918744 824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Running}}
I1210 06:23:23.946412 824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:23:23.968713 824724 cli_runner.go:164] Run: docker exec functional-534748 stat /var/lib/dpkg/alternatives/iptables
I1210 06:23:24.021764 824724 oci.go:144] the created container "functional-534748" has a running status.
I1210 06:23:24.021783 824724 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa...
I1210 06:23:24.182440 824724 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1210 06:23:24.220260 824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:23:24.245276 824724 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1210 06:23:24.245287 824724 kic_runner.go:114] Args: [docker exec --privileged functional-534748 chown docker:docker /home/docker/.ssh/authorized_keys]
I1210 06:23:24.309824 824724 cli_runner.go:164] Run: docker container inspect functional-534748 --format={{.State.Status}}
I1210 06:23:24.336706 824724 machine.go:94] provisionDockerMachine start ...
I1210 06:23:24.336816 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:24.370354 824724 main.go:143] libmachine: Using SSH client type: native
I1210 06:23:24.370926 824724 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1210 06:23:24.370948 824724 main.go:143] libmachine: About to run SSH command:
hostname
I1210 06:23:24.371869 824724 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1210 06:23:27.510298 824724 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
I1210 06:23:27.510313 824724 ubuntu.go:182] provisioning hostname "functional-534748"
I1210 06:23:27.510376 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:27.527671 824724 main.go:143] libmachine: Using SSH client type: native
I1210 06:23:27.527979 824724 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1210 06:23:27.527988 824724 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-534748 && echo "functional-534748" | sudo tee /etc/hostname
I1210 06:23:27.671809 824724 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-534748
I1210 06:23:27.671887 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:27.690829 824724 main.go:143] libmachine: Using SSH client type: native
I1210 06:23:27.691147 824724 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33530 <nil> <nil>}
I1210 06:23:27.691161 824724 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-534748' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-534748/g' /etc/hosts;
else
echo '127.0.1.1 functional-534748' | sudo tee -a /etc/hosts;
fi
fi
I1210 06:23:27.827212 824724 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1210 06:23:27.827230 824724 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-784887/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-784887/.minikube}
I1210 06:23:27.827263 824724 ubuntu.go:190] setting up certificates
I1210 06:23:27.827270 824724 provision.go:84] configureAuth start
I1210 06:23:27.827331 824724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
I1210 06:23:27.844782 824724 provision.go:143] copyHostCerts
I1210 06:23:27.844841 824724 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem, removing ...
I1210 06:23:27.844849 824724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem
I1210 06:23:27.844927 824724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/cert.pem (1123 bytes)
I1210 06:23:27.845086 824724 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem, removing ...
I1210 06:23:27.845097 824724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem
I1210 06:23:27.845125 824724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/key.pem (1675 bytes)
I1210 06:23:27.845178 824724 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem, removing ...
I1210 06:23:27.845182 824724 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem
I1210 06:23:27.845208 824724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-784887/.minikube/ca.pem (1082 bytes)
I1210 06:23:27.845251 824724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem org=jenkins.functional-534748 san=[127.0.0.1 192.168.49.2 functional-534748 localhost minikube]
I1210 06:23:28.092554 824724 provision.go:177] copyRemoteCerts
I1210 06:23:28.092615 824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1210 06:23:28.092669 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:28.111311 824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:23:28.211137 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1210 06:23:28.229630 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1210 06:23:28.247857 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1210 06:23:28.265684 824724 provision.go:87] duration metric: took 438.390632ms to configureAuth
I1210 06:23:28.265700 824724 ubuntu.go:206] setting minikube options for container-runtime
I1210 06:23:28.265893 824724 config.go:182] Loaded profile config "functional-534748": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1210 06:23:28.265900 824724 machine.go:97] duration metric: took 3.929182228s to provisionDockerMachine
I1210 06:23:28.265906 824724 client.go:176] duration metric: took 9.478110735s to LocalClient.Create
I1210 06:23:28.265920 824724 start.go:167] duration metric: took 9.478150588s to libmachine.API.Create "functional-534748"
I1210 06:23:28.265925 824724 start.go:293] postStartSetup for "functional-534748" (driver="docker")
I1210 06:23:28.265935 824724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1210 06:23:28.265985 824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1210 06:23:28.266022 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:28.283610 824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:23:28.382841 824724 ssh_runner.go:195] Run: cat /etc/os-release
I1210 06:23:28.386415 824724 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1210 06:23:28.386433 824724 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1210 06:23:28.386445 824724 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/addons for local assets ...
I1210 06:23:28.386525 824724 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-784887/.minikube/files for local assets ...
I1210 06:23:28.386615 824724 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem -> 7867512.pem in /etc/ssl/certs
I1210 06:23:28.386699 824724 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts -> hosts in /etc/test/nested/copy/786751
I1210 06:23:28.386743 824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/786751
I1210 06:23:28.394793 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /etc/ssl/certs/7867512.pem (1708 bytes)
I1210 06:23:28.413551 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/test/nested/copy/786751/hosts --> /etc/test/nested/copy/786751/hosts (40 bytes)
I1210 06:23:28.432312 824724 start.go:296] duration metric: took 166.372215ms for postStartSetup
I1210 06:23:28.432697 824724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
I1210 06:23:28.451224 824724 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/config.json ...
I1210 06:23:28.451545 824724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1210 06:23:28.451602 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:28.472090 824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:23:28.567977 824724 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1210 06:23:28.572788 824724 start.go:128] duration metric: took 9.788632154s to createHost
I1210 06:23:28.572804 824724 start.go:83] releasing machines lock for "functional-534748", held for 9.788754995s
I1210 06:23:28.572884 824724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-534748
I1210 06:23:28.593611 824724 out.go:179] * Found network options:
I1210 06:23:28.596602 824724 out.go:179] - HTTP_PROXY=localhost:46303
W1210 06:23:28.599496 824724 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1210 06:23:28.602371 824724 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1210 06:23:28.605113 824724 ssh_runner.go:195] Run: cat /version.json
I1210 06:23:28.605165 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:28.605177 824724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1210 06:23:28.605237 824724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-534748
I1210 06:23:28.629775 824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:23:28.640272 824724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/22089-784887/.minikube/machines/functional-534748/id_rsa Username:docker}
I1210 06:23:28.730494 824724 ssh_runner.go:195] Run: systemctl --version
I1210 06:23:28.826265 824724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1210 06:23:28.830807 824724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1210 06:23:28.830871 824724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1210 06:23:28.858711 824724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1210 06:23:28.858741 824724 start.go:496] detecting cgroup driver to use...
I1210 06:23:28.858775 824724 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1210 06:23:28.858828 824724 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1210 06:23:28.875395 824724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1210 06:23:28.889278 824724 docker.go:218] disabling cri-docker service (if available) ...
I1210 06:23:28.889348 824724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1210 06:23:28.907488 824724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1210 06:23:28.926002 824724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1210 06:23:29.053665 824724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1210 06:23:29.177719 824724 docker.go:234] disabling docker service ...
I1210 06:23:29.177783 824724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1210 06:23:29.201711 824724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1210 06:23:29.216552 824724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1210 06:23:29.341854 824724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1210 06:23:29.472954 824724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1210 06:23:29.485915 824724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1210 06:23:29.500245 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1210 06:23:29.509024 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1210 06:23:29.518257 824724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1210 06:23:29.518332 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1210 06:23:29.527159 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1210 06:23:29.535968 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1210 06:23:29.544602 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1210 06:23:29.553251 824724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1210 06:23:29.561523 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1210 06:23:29.570558 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1210 06:23:29.579030 824724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
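The sed series above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host, chiefly by forcing SystemdCgroup = false for the runc runtime. A sketch of the stanza being targeted, assuming the containerd 1.x-style CRI plugin path (containerd 2.x renames the plugin section, so treat the exact path as an assumption):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false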
I1210 06:23:29.588073 824724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1210 06:23:29.595737 824724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1210 06:23:29.603096 824724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 06:23:29.719604 824724 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1210 06:23:29.857543 824724 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1210 06:23:29.857607 824724 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1210 06:23:29.861945 824724 start.go:564] Will wait 60s for crictl version
I1210 06:23:29.862002 824724 ssh_runner.go:195] Run: which crictl
I1210 06:23:29.865723 824724 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1210 06:23:29.896288 824724 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1210 06:23:29.896349 824724 ssh_runner.go:195] Run: containerd --version
I1210 06:23:29.916811 824724 ssh_runner.go:195] Run: containerd --version
I1210 06:23:29.941490 824724 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1210 06:23:29.944391 824724 cli_runner.go:164] Run: docker network inspect functional-534748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 06:23:29.960572 824724 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1210 06:23:29.964489 824724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1210 06:23:29.974386 824724 kubeadm.go:884] updating cluster {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1210 06:23:29.974570 824724 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1210 06:23:29.974644 824724 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 06:23:29.999694 824724 containerd.go:627] all images are preloaded for containerd runtime.
I1210 06:23:29.999706 824724 containerd.go:534] Images already preloaded, skipping extraction
I1210 06:23:29.999767 824724 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 06:23:30.037219 824724 containerd.go:627] all images are preloaded for containerd runtime.
I1210 06:23:30.037233 824724 cache_images.go:86] Images are preloaded, skipping loading
I1210 06:23:30.037240 824724 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1210 06:23:30.037354 824724 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-534748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1210 06:23:30.037435 824724 ssh_runner.go:195] Run: sudo crictl info
I1210 06:23:30.075940 824724 cni.go:84] Creating CNI manager for ""
I1210 06:23:30.075952 824724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1210 06:23:30.075976 824724 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1210 06:23:30.075999 824724 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-534748 NodeName:functional-534748 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1210 06:23:30.076131 824724 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-534748"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
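A config like the one printed above can be exercised without modifying the node before the real init; a minimal sketch, assuming kubeadm is on the PATH and using the file path this log writes the config to a few lines below:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

--dry-run makes kubeadm print what it would do (manifests, kubeconfigs) without persisting changes, which surfaces most config errors early.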
I1210 06:23:30.076210 824724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1210 06:23:30.086272 824724 binaries.go:51] Found k8s binaries, skipping transfer
I1210 06:23:30.086344 824724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1210 06:23:30.095832 824724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1210 06:23:30.111095 824724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1210 06:23:30.125725 824724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1210 06:23:30.140343 824724 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1210 06:23:30.144388 824724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1210 06:23:30.155306 824724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 06:23:30.272267 824724 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1210 06:23:30.289369 824724 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748 for IP: 192.168.49.2
I1210 06:23:30.289380 824724 certs.go:195] generating shared ca certs ...
I1210 06:23:30.289407 824724 certs.go:227] acquiring lock for ca certs: {Name:mkcb7ff04cbcc0d76ccfcd6220476bfbfaf189ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.289569 824724 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key
I1210 06:23:30.289628 824724 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key
I1210 06:23:30.289635 824724 certs.go:257] generating profile certs ...
I1210 06:23:30.289702 824724 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key
I1210 06:23:30.289713 824724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt with IP's: []
I1210 06:23:30.577813 824724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt ...
I1210 06:23:30.577830 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.crt: {Name:mk182e2de3a6255438833644eab98673931582c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.578053 824724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key ...
I1210 06:23:30.578060 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/client.key: {Name:mk775339eb0119e3f53731683334a2bf251dfdc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.578160 824724 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f
I1210 06:23:30.578171 824724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1210 06:23:30.705071 824724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f ...
I1210 06:23:30.705088 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f: {Name:mk915863deb06984ded66016408409304916e860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.705280 824724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f ...
I1210 06:23:30.705288 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f: {Name:mk558cd656fe759f628c9df8b2c6b8157bf7257c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.705377 824724 certs.go:382] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt.7cb3dc2f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt
I1210 06:23:30.705459 824724 certs.go:386] copying /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key.7cb3dc2f -> /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key
I1210 06:23:30.705530 824724 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key
I1210 06:23:30.705545 824724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt with IP's: []
I1210 06:23:30.822321 824724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt ...
I1210 06:23:30.822338 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt: {Name:mk49c9b79acd5b2da0c0a9e737eef381494c2c27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.822549 824724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key ...
I1210 06:23:30.822557 824724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key: {Name:mk1cb30bcf4253db4762bdd181e4e7acf1302f2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 06:23:30.822782 824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem (1338 bytes)
W1210 06:23:30.822829 824724 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751_empty.pem, impossibly tiny 0 bytes
I1210 06:23:30.822837 824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca-key.pem (1675 bytes)
I1210 06:23:30.822862 824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/ca.pem (1082 bytes)
I1210 06:23:30.822891 824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/cert.pem (1123 bytes)
I1210 06:23:30.822913 824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/certs/key.pem (1675 bytes)
I1210 06:23:30.822957 824724 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem (1708 bytes)
I1210 06:23:30.823552 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1210 06:23:30.843034 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1210 06:23:30.862309 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1210 06:23:30.881205 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1210 06:23:30.899622 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1210 06:23:30.917571 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1210 06:23:30.936094 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1210 06:23:30.954214 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/profiles/functional-534748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1210 06:23:30.972466 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/files/etc/ssl/certs/7867512.pem --> /usr/share/ca-certificates/7867512.pem (1708 bytes)
I1210 06:23:30.990212 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1210 06:23:31.009829 824724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-784887/.minikube/certs/786751.pem --> /usr/share/ca-certificates/786751.pem (1338 bytes)
I1210 06:23:31.028583 824724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1210 06:23:31.041431 824724 ssh_runner.go:195] Run: openssl version
I1210 06:23:31.050524 824724 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/786751.pem
I1210 06:23:31.058272 824724 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/786751.pem /etc/ssl/certs/786751.pem
I1210 06:23:31.066301 824724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/786751.pem
I1210 06:23:31.070888 824724 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 06:23 /usr/share/ca-certificates/786751.pem
I1210 06:23:31.070945 824724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/786751.pem
I1210 06:23:31.116432 824724 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1210 06:23:31.124295 824724 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/786751.pem /etc/ssl/certs/51391683.0
I1210 06:23:31.132379 824724 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7867512.pem
I1210 06:23:31.141114 824724 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7867512.pem /etc/ssl/certs/7867512.pem
I1210 06:23:31.149283 824724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7867512.pem
I1210 06:23:31.153297 824724 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 06:23 /usr/share/ca-certificates/7867512.pem
I1210 06:23:31.153355 824724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7867512.pem
I1210 06:23:31.194916 824724 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1210 06:23:31.202267 824724 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7867512.pem /etc/ssl/certs/3ec20f2e.0
I1210 06:23:31.209669 824724 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1210 06:23:31.217299 824724 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1210 06:23:31.225148 824724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1210 06:23:31.228942 824724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 06:13 /usr/share/ca-certificates/minikubeCA.pem
I1210 06:23:31.228999 824724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1210 06:23:31.274881 824724 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1210 06:23:31.282499 824724 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
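The openssl/ln pairs above implement OpenSSL's hashed-symlink lookup: each CA certificate is linked under its subject-name hash with a ".0" suffix so TLS verifiers can locate it in /etc/ssl/certs by hash. The generic pattern, assuming a PEM certificate path in $CERT:

    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"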
I1210 06:23:31.289942 824724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1210 06:23:31.293526 824724 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1210 06:23:31.293570 824724 kubeadm.go:401] StartCluster: {Name:functional-534748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-534748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 06:23:31.293643 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1210 06:23:31.293702 824724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1210 06:23:31.321141 824724 cri.go:89] found id: ""
I1210 06:23:31.321205 824724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1210 06:23:31.329010 824724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1210 06:23:31.336715 824724 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1210 06:23:31.336796 824724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1210 06:23:31.344526 824724 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1210 06:23:31.344537 824724 kubeadm.go:158] found existing configuration files:
I1210 06:23:31.344591 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1210 06:23:31.352424 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1210 06:23:31.352480 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1210 06:23:31.359796 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1210 06:23:31.367596 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1210 06:23:31.367662 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1210 06:23:31.375285 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1210 06:23:31.383343 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1210 06:23:31.383409 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1210 06:23:31.390955 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1210 06:23:31.398664 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1210 06:23:31.398721 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1210 06:23:31.406242 824724 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1210 06:23:31.453829 824724 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1210 06:23:31.453880 824724 kubeadm.go:319] [preflight] Running pre-flight checks
I1210 06:23:31.534875 824724 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1210 06:23:31.534944 824724 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1210 06:23:31.534979 824724 kubeadm.go:319] OS: Linux
I1210 06:23:31.535022 824724 kubeadm.go:319] CGROUPS_CPU: enabled
I1210 06:23:31.535069 824724 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1210 06:23:31.535115 824724 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1210 06:23:31.535161 824724 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1210 06:23:31.535208 824724 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1210 06:23:31.535257 824724 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1210 06:23:31.535300 824724 kubeadm.go:319] CGROUPS_PIDS: enabled
I1210 06:23:31.535347 824724 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1210 06:23:31.535392 824724 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1210 06:23:31.602248 824724 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1210 06:23:31.602379 824724 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1210 06:23:31.602496 824724 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1210 06:23:31.610822 824724 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1210 06:23:31.617037 824724 out.go:252] - Generating certificates and keys ...
I1210 06:23:31.617136 824724 kubeadm.go:319] [certs] Using existing ca certificate authority
I1210 06:23:31.617201 824724 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1210 06:23:32.164365 824724 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1210 06:23:32.400510 824724 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1210 06:23:32.637786 824724 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1210 06:23:32.828842 824724 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1210 06:23:33.099528 824724 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1210 06:23:33.099825 824724 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1210 06:23:33.371416 824724 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1210 06:23:33.371713 824724 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1210 06:23:33.628997 824724 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1210 06:23:33.868348 824724 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1210 06:23:34.405704 824724 kubeadm.go:319] [certs] Generating "sa" key and public key
I1210 06:23:34.405768 824724 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1210 06:23:34.690708 824724 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1210 06:23:35.224487 824724 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1210 06:23:35.300387 824724 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1210 06:23:35.503970 824724 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1210 06:23:35.789168 824724 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1210 06:23:35.789772 824724 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1210 06:23:35.793039 824724 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1210 06:23:35.796572 824724 out.go:252] - Booting up control plane ...
I1210 06:23:35.796665 824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1210 06:23:35.796742 824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1210 06:23:35.797250 824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1210 06:23:35.813928 824724 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1210 06:23:35.814030 824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1210 06:23:35.822221 824724 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1210 06:23:35.822490 824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1210 06:23:35.822695 824724 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1210 06:23:35.959601 824724 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1210 06:23:35.959714 824724 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1210 06:27:35.959280 824724 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001121505s
I1210 06:27:35.959314 824724 kubeadm.go:319]
I1210 06:27:35.959376 824724 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1210 06:27:35.959412 824724 kubeadm.go:319] - The kubelet is not running
I1210 06:27:35.959525 824724 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1210 06:27:35.959530 824724 kubeadm.go:319]
I1210 06:27:35.959645 824724 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1210 06:27:35.959682 824724 kubeadm.go:319] - 'systemctl status kubelet'
I1210 06:27:35.959718 824724 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1210 06:27:35.959721 824724 kubeadm.go:319]
I1210 06:27:35.964211 824724 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1210 06:27:35.964697 824724 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1210 06:27:35.964835 824724 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1210 06:27:35.965086 824724 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1210 06:27:35.965096 824724 kubeadm.go:319]
I1210 06:27:35.965197 824724 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1210 06:27:35.965301 824724 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-534748 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001121505s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
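The 4m0s health check above does not time out by accident: the kubelet it waits on is crash-looping (see the kubelet journal near the end of this log). A generic diagnostic sketch, not part of this test run, to confirm the node is on cgroups v1 (the condition the warnings above point at) is to check the filesystem type mounted at /sys/fs/cgroup inside the node:

# cgroup2fs means cgroups v2 (unified hierarchy); tmpfs means cgroups v1
out/minikube-linux-arm64 ssh -p functional-534748 -- stat -fc %T /sys/fs/cgroup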
I1210 06:27:35.965400 824724 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1210 06:27:36.375968 824724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1210 06:27:36.389422 824724 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1210 06:27:36.389482 824724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1210 06:27:36.397243 824724 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1210 06:27:36.397253 824724 kubeadm.go:158] found existing configuration files:
I1210 06:27:36.397305 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1210 06:27:36.405114 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1210 06:27:36.405171 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1210 06:27:36.412753 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1210 06:27:36.420490 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1210 06:27:36.420545 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1210 06:27:36.428126 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1210 06:27:36.436293 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1210 06:27:36.436352 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1210 06:27:36.443908 824724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1210 06:27:36.451976 824724 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1210 06:27:36.452034 824724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1210 06:27:36.459851 824724 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1210 06:27:36.498112 824724 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1210 06:27:36.498163 824724 kubeadm.go:319] [preflight] Running pre-flight checks
I1210 06:27:36.573055 824724 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1210 06:27:36.573144  824724 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1210 06:27:36.573204  824724 kubeadm.go:319] OS: Linux
I1210 06:27:36.573260  824724 kubeadm.go:319] CGROUPS_CPU: enabled
I1210 06:27:36.573308  824724 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1210 06:27:36.573369  824724 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1210 06:27:36.573425  824724 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1210 06:27:36.573481  824724 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1210 06:27:36.573548  824724 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1210 06:27:36.573592  824724 kubeadm.go:319] CGROUPS_PIDS: enabled
I1210 06:27:36.573648  824724 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1210 06:27:36.573702  824724 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1210 06:27:36.643977 824724 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1210 06:27:36.644081 824724 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1210 06:27:36.644202 824724 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1210 06:27:36.651002 824724 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1210 06:27:36.656480 824724 out.go:252] - Generating certificates and keys ...
I1210 06:27:36.656581 824724 kubeadm.go:319] [certs] Using existing ca certificate authority
I1210 06:27:36.656659 824724 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1210 06:27:36.656760 824724 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1210 06:27:36.656824 824724 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1210 06:27:36.656899 824724 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1210 06:27:36.656956 824724 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1210 06:27:36.657022 824724 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1210 06:27:36.657087 824724 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1210 06:27:36.657166 824724 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1210 06:27:36.657242 824724 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1210 06:27:36.657282 824724 kubeadm.go:319] [certs] Using the existing "sa" key
I1210 06:27:36.657347 824724 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1210 06:27:37.043000 824724 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1210 06:27:37.557603 824724 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1210 06:27:37.836966 824724 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1210 06:27:37.930755 824724 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1210 06:27:38.179355 824724 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1210 06:27:38.180126 824724 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1210 06:27:38.182784 824724 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1210 06:27:38.186129 824724 out.go:252] - Booting up control plane ...
I1210 06:27:38.186236 824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1210 06:27:38.186313 824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1210 06:27:38.186379 824724 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1210 06:27:38.206601 824724 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1210 06:27:38.206701 824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1210 06:27:38.214027 824724 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1210 06:27:38.214319 824724 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1210 06:27:38.214521 824724 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1210 06:27:38.356052 824724 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1210 06:27:38.356165 824724 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1210 06:31:38.356742 824724 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00106645s
I1210 06:31:38.356763 824724 kubeadm.go:319]
I1210 06:31:38.356817 824724 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1210 06:31:38.356847 824724 kubeadm.go:319] - The kubelet is not running
I1210 06:31:38.357052 824724 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1210 06:31:38.357057 824724 kubeadm.go:319]
I1210 06:31:38.357161 824724 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1210 06:31:38.357190 824724 kubeadm.go:319] - 'systemctl status kubelet'
I1210 06:31:38.357219 824724 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1210 06:31:38.357221 824724 kubeadm.go:319]
I1210 06:31:38.361679 824724 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1210 06:31:38.362132 824724 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1210 06:31:38.362236 824724 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1210 06:31:38.362471 824724 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1210 06:31:38.362478 824724 kubeadm.go:319]
I1210 06:31:38.362566 824724 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
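The retry fails the same way: kubeadm's 4m0s wait expires because the kubelet it just started keeps exiting. A sketch of the manual equivalent of the log gathering minikube performs below, using this run's binary and profile name:

# Read the most recent kubelet failures directly from the node's journal
out/minikube-linux-arm64 ssh -p functional-534748 -- sudo journalctl -u kubelet -n 20 --no-pager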
I1210 06:31:38.362606 824724 kubeadm.go:403] duration metric: took 8m7.069039681s to StartCluster
I1210 06:31:38.362655 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1210 06:31:38.362721 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1210 06:31:38.387152 824724 cri.go:89] found id: ""
I1210 06:31:38.387182 824724 logs.go:282] 0 containers: []
W1210 06:31:38.387188 824724 logs.go:284] No container was found matching "kube-apiserver"
I1210 06:31:38.387193 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1210 06:31:38.387251 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1210 06:31:38.411100 824724 cri.go:89] found id: ""
I1210 06:31:38.411115 824724 logs.go:282] 0 containers: []
W1210 06:31:38.411121 824724 logs.go:284] No container was found matching "etcd"
I1210 06:31:38.411126 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1210 06:31:38.411184 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1210 06:31:38.435309 824724 cri.go:89] found id: ""
I1210 06:31:38.435322 824724 logs.go:282] 0 containers: []
W1210 06:31:38.435329 824724 logs.go:284] No container was found matching "coredns"
I1210 06:31:38.435334 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1210 06:31:38.435399 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1210 06:31:38.460192 824724 cri.go:89] found id: ""
I1210 06:31:38.460205 824724 logs.go:282] 0 containers: []
W1210 06:31:38.460212 824724 logs.go:284] No container was found matching "kube-scheduler"
I1210 06:31:38.460217 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1210 06:31:38.460276 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1210 06:31:38.484680 824724 cri.go:89] found id: ""
I1210 06:31:38.484695 824724 logs.go:282] 0 containers: []
W1210 06:31:38.484701 824724 logs.go:284] No container was found matching "kube-proxy"
I1210 06:31:38.484706 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1210 06:31:38.484766 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1210 06:31:38.508595 824724 cri.go:89] found id: ""
I1210 06:31:38.508608 824724 logs.go:282] 0 containers: []
W1210 06:31:38.508615 824724 logs.go:284] No container was found matching "kube-controller-manager"
I1210 06:31:38.508621 824724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1210 06:31:38.508680 824724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1210 06:31:38.534537 824724 cri.go:89] found id: ""
I1210 06:31:38.534551 824724 logs.go:282] 0 containers: []
W1210 06:31:38.534558 824724 logs.go:284] No container was found matching "kindnet"
I1210 06:31:38.534567 824724 logs.go:123] Gathering logs for dmesg ...
I1210 06:31:38.534579 824724 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1210 06:31:38.551392 824724 logs.go:123] Gathering logs for describe nodes ...
I1210 06:31:38.551409 824724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1210 06:31:38.618849 824724 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1210 06:31:38.605890 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.611125 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.611837 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.613473 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.613796 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1210 06:31:38.605890 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.611125 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.611837 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.613473 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:38.613796 4771 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1210 06:31:38.618860 824724 logs.go:123] Gathering logs for containerd ...
I1210 06:31:38.618873 824724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1210 06:31:38.659217 824724 logs.go:123] Gathering logs for container status ...
I1210 06:31:38.659242 824724 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1210 06:31:38.686990 824724 logs.go:123] Gathering logs for kubelet ...
I1210 06:31:38.687005 824724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1210 06:31:38.744286 824724 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00106645s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1210 06:31:38.744331 824724 out.go:285] *
W1210 06:31:38.744399 824724 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00106645s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1210 06:31:38.744421 824724 out.go:285] *
W1210 06:31:38.746553 824724 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1210 06:31:38.752408 824724 out.go:203]
W1210 06:31:38.755993 824724 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00106645s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1210 06:31:38.756041 824724 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1210 06:31:38.756061 824724 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1210 06:31:38.759668 824724 out.go:203]
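Spelled out with this test's flags, the suggested re-run would look like the sketch below (untested here; on a cgroups v1 host the kubelet validation failure shown in the journal further down would still need to be addressed):

out/minikube-linux-arm64 start -p functional-534748 --memory=4096 --apiserver-port=8441 \
  --wait=all --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd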
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804018086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804089931Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804203434Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804280793Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804381914Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804455885Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804517473Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804616461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804709295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.804815496Z" level=info msg="Connect containerd service"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.806335967Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.807027307Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816456518Z" level=info msg="Start subscribing containerd event"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816675745Z" level=info msg="Start recovering state"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816681612Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.816908003Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854717261Z" level=info msg="Start event monitor"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854769307Z" level=info msg="Start cni network conf syncer for default"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854779908Z" level=info msg="Start streaming server"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854789911Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854798346Z" level=info msg="runtime interface starting up..."
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854804549Z" level=info msg="starting plugins..."
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.854816651Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 10 06:23:29 functional-534748 systemd[1]: Started containerd.service - containerd container runtime.
Dec 10 06:23:29 functional-534748 containerd[764]: time="2025-12-10T06:23:29.858775013Z" level=info msg="containerd successfully booted in 0.076971s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1210 06:31:39.733433 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:39.733867 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:39.735362 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:39.735759 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1210 06:31:39.737222 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec10 05:20] overlayfs: idmapped layers are currently not supported
[ +2.735648] overlayfs: idmapped layers are currently not supported
[Dec10 05:21] overlayfs: idmapped layers are currently not supported
[ +24.110991] overlayfs: idmapped layers are currently not supported
[Dec10 05:22] overlayfs: idmapped layers are currently not supported
[ +24.761042] overlayfs: idmapped layers are currently not supported
[Dec10 05:23] overlayfs: idmapped layers are currently not supported
[Dec10 05:25] overlayfs: idmapped layers are currently not supported
[Dec10 05:27] overlayfs: idmapped layers are currently not supported
[ +0.867763] overlayfs: idmapped layers are currently not supported
[Dec10 05:29] overlayfs: idmapped layers are currently not supported
[Dec10 05:40] overlayfs: idmapped layers are currently not supported
[Dec10 05:41] overlayfs: idmapped layers are currently not supported
[Dec10 05:42] overlayfs: idmapped layers are currently not supported
[ +24.057374] overlayfs: idmapped layers are currently not supported
[Dec10 05:43] overlayfs: idmapped layers are currently not supported
[Dec10 05:44] overlayfs: idmapped layers are currently not supported
[Dec10 05:45] overlayfs: idmapped layers are currently not supported
[Dec10 05:46] overlayfs: idmapped layers are currently not supported
[Dec10 05:47] overlayfs: idmapped layers are currently not supported
[Dec10 05:48] overlayfs: idmapped layers are currently not supported
[Dec10 05:50] overlayfs: idmapped layers are currently not supported
[Dec10 06:08] overlayfs: idmapped layers are currently not supported
[Dec10 06:09] overlayfs: idmapped layers are currently not supported
[Dec10 06:11] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
06:31:39 up 5:13, 0 user, load average: 0.12, 0.47, 1.07
Linux functional-534748 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
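The kernel line above (5.15.0-1084-aws, an Ubuntu 20.04 host kernel under a Debian 12 node image) is consistent with a host still booted into cgroups v1. One common way to switch such a host to cgroups v2, assuming a GRUB-based Ubuntu host; this is a host-level change, unrelated to the test binaries:

# Append the unified-hierarchy parameter to the kernel command line, then reboot
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
sudo update-grub
sudo reboot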
==> kubelet <==
Dec 10 06:31:36 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 10 06:31:37 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 10 06:31:37 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:37 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:37 functional-534748 kubelet[4699]: E1210 06:31:37.337790 4699 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 10 06:31:37 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 06:31:37 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 10 06:31:38 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:38 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:38 functional-534748 kubelet[4704]: E1210 06:31:38.090975 4704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 10 06:31:38 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:38 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:38 functional-534748 kubelet[4791]: E1210 06:31:38.865486 4791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 06:31:38 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 10 06:31:39 functional-534748 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 10 06:31:39 functional-534748 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:39 functional-534748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 10 06:31:39 functional-534748 kubelet[4855]: E1210 06:31:39.619203 4855 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 10 06:31:39 functional-534748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 10 06:31:39 functional-534748 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
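The kubelet journal above shows the real failure: kubelet v1.35 refuses to start on a cgroups v1 host unless that is explicitly allowed. Per the [WARNING SystemVerification] text earlier, the opt-out is the kubelet configuration option 'FailCgroupV1'. A minimal sketch of the fragment that warning refers to; the lowercase field name and the way it would be wired into minikube's kubeadm patches are assumptions, not shown in this log:

# Sketch only: write the opt-out fragment the warning describes
cat <<'EOF' > kubeletconfiguration-patch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
EOF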
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-534748 -n functional-534748: exit status 6 (358.166138ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1210 06:31:40.215114 830486 status.go:458] kubeconfig endpoint: get endpoint: "functional-534748" does not appear in /home/jenkins/minikube-integration/22089-784887/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-534748" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.72s)