=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 10:22:56.110881 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:12.241115 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:25:39.953835 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.083780 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.090181 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.101659 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.123210 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.164688 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.246280 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.407806 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:48.729538 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:49.371660 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:50.653380 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:53.218792 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:26:58.340647 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:08.582159 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:27:29.063620 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:28:10.025144 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:29:31.950182 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-319494/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 10:30:12.241078 308915 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/addons-672850/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m21.741312291s)
-- stdout --
* [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22127
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-652709" primary control-plane node in "functional-652709" cluster
* Pulling base image v0.0.48-1765275396-22083 ...
* Found network options:
- HTTP_PROXY=localhost:46303
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001296773s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000358804s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000358804s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
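One plausible remediation for the two proxy warnings in the stderr above is to add the minikube IP (and the cluster subnet) to NO_PROXY before starting, per https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/. A minimal sketch, untested in this run, with the IP and subnet taken from the warning and from the docker network created in the logs below:

    # hypothetical retry with the minikube IP excluded from the proxy
    export NO_PROXY="$NO_PROXY,192.168.49.2,192.168.49.0/24"
    out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0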
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
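The actual failure mode, repeated across the kubeadm attempts above, is the kubelet never answering http://127.0.0.1:10248/healthz on this cgroups-v1 host. The log itself names two candidate fixes, neither verified here: minikube's suggestion of forcing the systemd cgroup driver, and the kubeadm warning's route of explicitly re-enabling cgroup v1 for kubelet v1.35+:

    # Option 1 (minikube's own suggestion above; other flags as in the failing command)
    out/minikube-linux-arm64 start -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

    # Option 2 (per the SystemVerification warning): set FailCgroupV1 to false in the
    # KubeletConfiguration that kubeadm consumes; camelCase YAML field casing assumed
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false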
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-652709
helpers_test.go:244: (dbg) docker inspect functional-652709:
-- stdout --
[
{
"Id": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
"Created": "2025-12-13T10:22:44.366993781Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 347931,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-13T10:22:44.437030763Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
"ResolvConfPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hostname",
"HostsPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/hosts",
"LogPath": "/var/lib/docker/containers/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f/0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f-json.log",
"Name": "/functional-652709",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-652709:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-652709",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0f6101071ca209986b2d8c08966199247968676a1dba0d6ab5eab42c14a09e0f",
"LowerDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095-init/diff:/var/lib/docker/overlay2/5d0ca86320f2c06765995d644a73af30cd6ddb36f5c1a2a6b1ebb24af53cdabd/diff",
"MergedDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/merged",
"UpperDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/diff",
"WorkDir": "/var/lib/docker/overlay2/11aac50dc0e7e386bac672497d4f87e3668eea017aec837629dcc6ce4bb57095/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-652709",
"Source": "/var/lib/docker/volumes/functional-652709/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-652709",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-652709",
"name.minikube.sigs.k8s.io": "functional-652709",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "52e527b5bd789a02eb7efb651200033ed4929e5fc7545e9df042d3f777cc9782",
"SandboxKey": "/var/run/docker/netns/52e527b5bd78",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33125"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33126"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33129"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33127"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33128"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-652709": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "5a:23:08:9e:cb:13",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "344f2b940117dadb28d1ef1328f911c0446307288fdfafebfe59f38e473f79cb",
"EndpointID": "8954f96e5987202be5715e7023384fe862744778b2520bccba28c57814f0980f",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-652709",
"0f6101071ca2"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-652709 -n functional-652709: exit status 6 (350.281966ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1213 10:31:01.479874 353104 status.go:458] kubeconfig endpoint: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
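Beyond the start failure itself, the status output above shows kubectl left pointing at a stale context, with no "functional-652709" entry in the integration kubeconfig. minikube's own hint applies; a sketch of the repair it names (assuming the same binary and profile used throughout this run):

    out/minikube-linux-arm64 -p functional-652709 update-context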
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-652709 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ addons │ addons-672850 addons disable ingress --alsologtostderr -v=1 │ addons-672850 │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:17 UTC │
│ stop │ -p addons-672850 │ addons-672850 │ jenkins │ v1.37.0 │ 13 Dec 25 10:17 UTC │ 13 Dec 25 10:18 UTC │
│ addons │ enable dashboard -p addons-672850 │ addons-672850 │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
│ addons │ disable dashboard -p addons-672850 │ addons-672850 │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
│ addons │ disable gvisor -p addons-672850 │ addons-672850 │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
│ delete │ -p addons-672850 │ addons-672850 │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
│ start │ -p dockerenv-403574 --driver=docker --container-runtime=containerd │ dockerenv-403574 │ jenkins │ v1.37.0 │ 13 Dec 25 10:18 UTC │ 13 Dec 25 10:18 UTC │
│ docker-env │ --ssh-host --ssh-add -p dockerenv-403574 │ dockerenv-403574 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
│ delete │ -p dockerenv-403574 │ dockerenv-403574 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
│ start │ -p nospam-462625 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-462625 --driver=docker --container-runtime=containerd │ nospam-462625 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
│ start │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run │ nospam-462625 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ │
│ start │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run │ nospam-462625 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ │
│ start │ nospam-462625 --log_dir /tmp/nospam-462625 start --dry-run │ nospam-462625 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ │
│ pause │ nospam-462625 --log_dir /tmp/nospam-462625 pause │ nospam-462625 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
│ pause │ nospam-462625 --log_dir /tmp/nospam-462625 pause │ nospam-462625 │ jenkins │ v1.37.0 │ 13 Dec 25 10:19 UTC │ 13 Dec 25 10:19 UTC │
│ update-context │ functional-319494 update-context --alsologtostderr -v=2 │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ image │ functional-319494 image ls --format short --alsologtostderr │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ image │ functional-319494 image ls --format yaml --alsologtostderr │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ ssh │ functional-319494 ssh pgrep buildkitd │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ │
│ image │ functional-319494 image ls --format json --alsologtostderr │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ image │ functional-319494 image build -t localhost/my-image:functional-319494 testdata/build --alsologtostderr │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ image │ functional-319494 image ls --format table --alsologtostderr │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ image │ functional-319494 image ls │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ delete │ -p functional-319494 │ functional-319494 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ 13 Dec 25 10:22 UTC │
│ start │ -p functional-652709 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-652709 │ jenkins │ v1.37.0 │ 13 Dec 25 10:22 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 10:22:39
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 10:22:39.421960 347534 out.go:360] Setting OutFile to fd 1 ...
I1213 10:22:39.422061 347534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:39.422065 347534 out.go:374] Setting ErrFile to fd 2...
I1213 10:22:39.422069 347534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 10:22:39.422314 347534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22127-307042/.minikube/bin
I1213 10:22:39.422757 347534 out.go:368] Setting JSON to false
I1213 10:22:39.423550 347534 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11112,"bootTime":1765610247,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1213 10:22:39.423602 347534 start.go:143] virtualization:
I1213 10:22:39.427949 347534 out.go:179] * [functional-652709] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1213 10:22:39.432335 347534 out.go:179] - MINIKUBE_LOCATION=22127
I1213 10:22:39.432462 347534 notify.go:221] Checking for updates...
I1213 10:22:39.439559 347534 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 10:22:39.442888 347534 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22127-307042/kubeconfig
I1213 10:22:39.446129 347534 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22127-307042/.minikube
I1213 10:22:39.449270 347534 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1213 10:22:39.452443 347534 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 10:22:39.455730 347534 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 10:22:39.489393 347534 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1213 10:22:39.489505 347534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:22:39.543781 347534 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:22:39.533679294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:22:39.543875 347534 docker.go:319] overlay module found
I1213 10:22:39.547092 347534 out.go:179] * Using the docker driver based on user configuration
I1213 10:22:39.550078 347534 start.go:309] selected driver: docker
I1213 10:22:39.550085 347534 start.go:927] validating driver "docker" against <nil>
I1213 10:22:39.550097 347534 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 10:22:39.550915 347534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 10:22:39.622622 347534 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 10:22:39.610875453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 10:22:39.622857 347534 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1213 10:22:39.623165 347534 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 10:22:39.626177 347534 out.go:179] * Using Docker driver with root privileges
I1213 10:22:39.629190 347534 cni.go:84] Creating CNI manager for ""
I1213 10:22:39.629268 347534 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1213 10:22:39.629279 347534 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1213 10:22:39.629390 347534 start.go:353] cluster config:
{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 10:22:39.632752 347534 out.go:179] * Starting "functional-652709" primary control-plane node in "functional-652709" cluster
I1213 10:22:39.635589 347534 cache.go:134] Beginning downloading kic base image for docker with containerd
I1213 10:22:39.638791 347534 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
I1213 10:22:39.641667 347534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 10:22:39.641710 347534 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1213 10:22:39.641720 347534 cache.go:65] Caching tarball of preloaded images
I1213 10:22:39.641744 347534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
I1213 10:22:39.641829 347534 preload.go:238] Found /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1213 10:22:39.641839 347534 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1213 10:22:39.642240 347534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
I1213 10:22:39.642267 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json: {Name:mkdd6dba0d583de35ce43823020b0dfb44a1a137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:39.662516 347534 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
I1213 10:22:39.662527 347534 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
I1213 10:22:39.662545 347534 cache.go:243] Successfully downloaded all kic artifacts
I1213 10:22:39.662579 347534 start.go:360] acquireMachinesLock for functional-652709: {Name:mk6e8c40fbbb5af0bb2468340fd710875030300d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 10:22:39.662677 347534 start.go:364] duration metric: took 84.768µs to acquireMachinesLock for "functional-652709"
I1213 10:22:39.662727 347534 start.go:93] Provisioning new machine with config: &{Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1213 10:22:39.662836 347534 start.go:125] createHost starting for "" (driver="docker")
I1213 10:22:39.666292 347534 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1213 10:22:39.666632 347534 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46303 to docker env.
I1213 10:22:39.666659 347534 start.go:159] libmachine.API.Create for "functional-652709" (driver="docker")
I1213 10:22:39.666715 347534 client.go:173] LocalClient.Create starting
I1213 10:22:39.666828 347534 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem
I1213 10:22:39.666870 347534 main.go:143] libmachine: Decoding PEM data...
I1213 10:22:39.666898 347534 main.go:143] libmachine: Parsing certificate...
I1213 10:22:39.666965 347534 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem
I1213 10:22:39.666984 347534 main.go:143] libmachine: Decoding PEM data...
I1213 10:22:39.666994 347534 main.go:143] libmachine: Parsing certificate...
I1213 10:22:39.667417 347534 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 10:22:39.684615 347534 cli_runner.go:211] docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 10:22:39.684708 347534 network_create.go:284] running [docker network inspect functional-652709] to gather additional debugging logs...
I1213 10:22:39.684725 347534 cli_runner.go:164] Run: docker network inspect functional-652709
W1213 10:22:39.700381 347534 cli_runner.go:211] docker network inspect functional-652709 returned with exit code 1
I1213 10:22:39.700411 347534 network_create.go:287] error running [docker network inspect functional-652709]: docker network inspect functional-652709: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-652709 not found
I1213 10:22:39.700425 347534 network_create.go:289] output of [docker network inspect functional-652709]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-652709 not found
** /stderr **
I1213 10:22:39.700522 347534 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 10:22:39.717552 347534 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400194d540}
I1213 10:22:39.717582 347534 network_create.go:124] attempt to create docker network functional-652709 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1213 10:22:39.717644 347534 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-652709 functional-652709
I1213 10:22:39.786135 347534 network_create.go:108] docker network functional-652709 192.168.49.0/24 created
I1213 10:22:39.786166 347534 kic.go:121] calculated static IP "192.168.49.2" for the "functional-652709" container
I1213 10:22:39.786258 347534 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1213 10:22:39.801500 347534 cli_runner.go:164] Run: docker volume create functional-652709 --label name.minikube.sigs.k8s.io=functional-652709 --label created_by.minikube.sigs.k8s.io=true
I1213 10:22:39.819680 347534 oci.go:103] Successfully created a docker volume functional-652709
I1213 10:22:39.819773 347534 cli_runner.go:164] Run: docker run --rm --name functional-652709-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-652709 --entrypoint /usr/bin/test -v functional-652709:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
I1213 10:22:40.392339 347534 oci.go:107] Successfully prepared a docker volume functional-652709
I1213 10:22:40.392403 347534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 10:22:40.392410 347534 kic.go:194] Starting extracting preloaded images to volume ...
I1213 10:22:40.392484 347534 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-652709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
I1213 10:22:44.282605 347534 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22127-307042/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-652709:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (3.890086063s)
I1213 10:22:44.282627 347534 kic.go:203] duration metric: took 3.890213564s to extract preloaded images to volume ...
W1213 10:22:44.282821 347534 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1213 10:22:44.282918 347534 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1213 10:22:44.352211 347534 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-652709 --name functional-652709 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-652709 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-652709 --network functional-652709 --ip 192.168.49.2 --volume functional-652709:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
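The docker run invocation above, reflowed with shell line continuations (same flags in the same order; nothing added or changed):
docker run -d -t --privileged --security-opt seccomp=unconfined \
  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
  --hostname functional-652709 --name functional-652709 \
  --label created_by.minikube.sigs.k8s.io=true \
  --label name.minikube.sigs.k8s.io=functional-652709 \
  --label role.minikube.sigs.k8s.io= \
  --label mode.minikube.sigs.k8s.io=functional-652709 \
  --network functional-652709 --ip 192.168.49.2 \
  --volume functional-652709:/var \
  --security-opt apparmor=unconfined \
  --memory=4096mb --cpus=2 -e container=docker \
  --expose 8441 \
  --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f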
I1213 10:22:44.665912 347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Running}}
I1213 10:22:44.689544 347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:22:44.718557 347534 cli_runner.go:164] Run: docker exec functional-652709 stat /var/lib/dpkg/alternatives/iptables
I1213 10:22:44.769260 347534 oci.go:144] the created container "functional-652709" has a running status.
I1213 10:22:44.769280 347534 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa...
I1213 10:22:44.807632 347534 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1213 10:22:44.830872 347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:22:44.853184 347534 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1213 10:22:44.853196 347534 kic_runner.go:114] Args: [docker exec --privileged functional-652709 chown docker:docker /home/docker/.ssh/authorized_keys]
I1213 10:22:44.906572 347534 cli_runner.go:164] Run: docker container inspect functional-652709 --format={{.State.Status}}
I1213 10:22:44.928631 347534 machine.go:94] provisionDockerMachine start ...
I1213 10:22:44.928734 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:44.952127 347534 main.go:143] libmachine: Using SSH client type: native
I1213 10:22:44.952486 347534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33125 <nil> <nil>}
I1213 10:22:44.952503 347534 main.go:143] libmachine: About to run SSH command:
hostname
I1213 10:22:44.953140 347534 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1213 10:22:48.106435 347534 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
I1213 10:22:48.106459 347534 ubuntu.go:182] provisioning hostname "functional-652709"
I1213 10:22:48.106528 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:48.124661 347534 main.go:143] libmachine: Using SSH client type: native
I1213 10:22:48.125032 347534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33125 <nil> <nil>}
I1213 10:22:48.125042 347534 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-652709 && echo "functional-652709" | sudo tee /etc/hostname
I1213 10:22:48.284292 347534 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-652709
I1213 10:22:48.284364 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:48.301930 347534 main.go:143] libmachine: Using SSH client type: native
I1213 10:22:48.302240 347534 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 33125 <nil> <nil>}
I1213 10:22:48.302254 347534 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-652709' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-652709/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-652709' | sudo tee -a /etc/hosts;
  fi
fi
I1213 10:22:48.451049 347534 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 10:22:48.451066 347534 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22127-307042/.minikube CaCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22127-307042/.minikube}
I1213 10:22:48.451095 347534 ubuntu.go:190] setting up certificates
I1213 10:22:48.451103 347534 provision.go:84] configureAuth start
I1213 10:22:48.451161 347534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
I1213 10:22:48.469143 347534 provision.go:143] copyHostCerts
I1213 10:22:48.469210 347534 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem, removing ...
I1213 10:22:48.469218 347534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem
I1213 10:22:48.469294 347534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/ca.pem (1082 bytes)
I1213 10:22:48.469390 347534 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem, removing ...
I1213 10:22:48.469394 347534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem
I1213 10:22:48.469422 347534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/cert.pem (1123 bytes)
I1213 10:22:48.469472 347534 exec_runner.go:144] found /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem, removing ...
I1213 10:22:48.469475 347534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem
I1213 10:22:48.469498 347534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22127-307042/.minikube/key.pem (1675 bytes)
I1213 10:22:48.469541 347534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem org=jenkins.functional-652709 san=[127.0.0.1 192.168.49.2 functional-652709 localhost minikube]
I1213 10:22:48.570422 347534 provision.go:177] copyRemoteCerts
I1213 10:22:48.570474 347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1213 10:22:48.570512 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:48.587198 347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:22:48.690421 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1213 10:22:48.707570 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1213 10:22:48.725248 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1213 10:22:48.742477 347534 provision.go:87] duration metric: took 291.350942ms to configureAuth
I1213 10:22:48.742495 347534 ubuntu.go:206] setting minikube options for container-runtime
I1213 10:22:48.742786 347534 config.go:182] Loaded profile config "functional-652709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 10:22:48.742795 347534 machine.go:97] duration metric: took 3.8141523s to provisionDockerMachine
I1213 10:22:48.742801 347534 client.go:176] duration metric: took 9.076081213s to LocalClient.Create
I1213 10:22:48.742825 347534 start.go:167] duration metric: took 9.076167573s to libmachine.API.Create "functional-652709"
I1213 10:22:48.742832 347534 start.go:293] postStartSetup for "functional-652709" (driver="docker")
I1213 10:22:48.742841 347534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1213 10:22:48.742897 347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1213 10:22:48.742977 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:48.760384 347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:22:48.866658 347534 ssh_runner.go:195] Run: cat /etc/os-release
I1213 10:22:48.870023 347534 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1213 10:22:48.870040 347534 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1213 10:22:48.870050 347534 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/addons for local assets ...
I1213 10:22:48.870112 347534 filesync.go:126] Scanning /home/jenkins/minikube-integration/22127-307042/.minikube/files for local assets ...
I1213 10:22:48.870201 347534 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem -> 3089152.pem in /etc/ssl/certs
I1213 10:22:48.870281 347534 filesync.go:149] local asset: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts -> hosts in /etc/test/nested/copy/308915
I1213 10:22:48.870329 347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/308915
I1213 10:22:48.877987 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /etc/ssl/certs/3089152.pem (1708 bytes)
I1213 10:22:48.895388 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/test/nested/copy/308915/hosts --> /etc/test/nested/copy/308915/hosts (40 bytes)
I1213 10:22:48.913143 347534 start.go:296] duration metric: took 170.296932ms for postStartSetup
I1213 10:22:48.913501 347534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
I1213 10:22:48.931292 347534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/config.json ...
I1213 10:22:48.931575 347534 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1213 10:22:48.931614 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:48.948293 347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:22:49.051913 347534 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1213 10:22:49.056599 347534 start.go:128] duration metric: took 9.393749657s to createHost
I1213 10:22:49.056613 347534 start.go:83] releasing machines lock for "functional-652709", held for 9.393929393s
I1213 10:22:49.056687 347534 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-652709
I1213 10:22:49.077337 347534 out.go:179] * Found network options:
I1213 10:22:49.080202 347534 out.go:179] - HTTP_PROXY=localhost:46303
W1213 10:22:49.083133 347534 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1213 10:22:49.086024 347534 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1213 10:22:49.088946 347534 ssh_runner.go:195] Run: cat /version.json
I1213 10:22:49.089012 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:49.089013 347534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1213 10:22:49.089069 347534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-652709
I1213 10:22:49.117533 347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:22:49.124293 347534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33125 SSHKeyPath:/home/jenkins/minikube-integration/22127-307042/.minikube/machines/functional-652709/id_rsa Username:docker}
I1213 10:22:49.222429 347534 ssh_runner.go:195] Run: systemctl --version
I1213 10:22:49.315468 347534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1213 10:22:49.319814 347534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1213 10:22:49.319884 347534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1213 10:22:49.345944 347534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1213 10:22:49.345957 347534 start.go:496] detecting cgroup driver to use...
I1213 10:22:49.345987 347534 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1213 10:22:49.346035 347534 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1213 10:22:49.360971 347534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1213 10:22:49.373747 347534 docker.go:218] disabling cri-docker service (if available) ...
I1213 10:22:49.373807 347534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1213 10:22:49.391534 347534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1213 10:22:49.410094 347534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1213 10:22:49.528445 347534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1213 10:22:49.657591 347534 docker.go:234] disabling docker service ...
I1213 10:22:49.657663 347534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1213 10:22:49.682596 347534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1213 10:22:49.695994 347534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1213 10:22:49.811277 347534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1213 10:22:49.922872 347534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1213 10:22:49.936025 347534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1213 10:22:49.949772 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1213 10:22:49.959041 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1213 10:22:49.968328 347534 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1213 10:22:49.968395 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1213 10:22:49.977747 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1213 10:22:49.987321 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1213 10:22:49.997250 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1213 10:22:50.019551 347534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1213 10:22:50.028873 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1213 10:22:50.039465 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1213 10:22:50.049007 347534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1213 10:22:50.058658 347534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1213 10:22:50.066833 347534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1213 10:22:50.074883 347534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 10:22:50.207353 347534 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1213 10:22:50.344034 347534 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1213 10:22:50.344107 347534 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1213 10:22:50.348005 347534 start.go:564] Will wait 60s for crictl version
I1213 10:22:50.348057 347534 ssh_runner.go:195] Run: which crictl
I1213 10:22:50.351497 347534 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1213 10:22:50.374963 347534 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1213 10:22:50.375051 347534 ssh_runner.go:195] Run: containerd --version
I1213 10:22:50.395858 347534 ssh_runner.go:195] Run: containerd --version
I1213 10:22:50.421963 347534 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1213 10:22:50.425068 347534 cli_runner.go:164] Run: docker network inspect functional-652709 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 10:22:50.442871 347534 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1213 10:22:50.446595 347534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 10:22:50.457009 347534 kubeadm.go:884] updating cluster {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1213 10:22:50.457115 347534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 10:22:50.457188 347534 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 10:22:50.482070 347534 containerd.go:627] all images are preloaded for containerd runtime.
I1213 10:22:50.482082 347534 containerd.go:534] Images already preloaded, skipping extraction
I1213 10:22:50.482140 347534 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 10:22:50.509294 347534 containerd.go:627] all images are preloaded for containerd runtime.
I1213 10:22:50.509306 347534 cache_images.go:86] Images are preloaded, skipping loading
I1213 10:22:50.509312 347534 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1213 10:22:50.509401 347534 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-652709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1213 10:22:50.509462 347534 ssh_runner.go:195] Run: sudo crictl info
I1213 10:22:50.534821 347534 cni.go:84] Creating CNI manager for ""
I1213 10:22:50.534833 347534 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1213 10:22:50.534848 347534 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1213 10:22:50.534870 347534 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-652709 NodeName:functional-652709 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1213 10:22:50.534994 347534 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-652709"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
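A generated config like the one above can be sanity-checked against the target kubeadm binary before init runs. A minimal sketch, assuming a kubeadm release that ships the config validate subcommand and the binary path staged by minikube (a hypothetical manual check, not something this test run executed):
docker exec functional-652709 sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm \
  config validate --config /var/tmp/minikube/kubeadm.yaml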
I1213 10:22:50.535059 347534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1213 10:22:50.542985 347534 binaries.go:51] Found k8s binaries, skipping transfer
I1213 10:22:50.543043 347534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1213 10:22:50.550654 347534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1213 10:22:50.563543 347534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1213 10:22:50.577040 347534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1213 10:22:50.590534 347534 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1213 10:22:50.594062 347534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 10:22:50.603707 347534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 10:22:50.719523 347534 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 10:22:50.736125 347534 certs.go:69] Setting up /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709 for IP: 192.168.49.2
I1213 10:22:50.736136 347534 certs.go:195] generating shared ca certs ...
I1213 10:22:50.736150 347534 certs.go:227] acquiring lock for ca certs: {Name:mkca84a85b1b664dba08ef567e84d493256f825e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:50.736314 347534 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key
I1213 10:22:50.736357 347534 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key
I1213 10:22:50.736364 347534 certs.go:257] generating profile certs ...
I1213 10:22:50.736418 347534 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key
I1213 10:22:50.736427 347534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt with IP's: []
I1213 10:22:51.182304 347534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt ...
I1213 10:22:51.182320 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.crt: {Name:mke1b6d7e6424580fd39d75cde2a9ed5cfcf2718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:51.182527 347534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key ...
I1213 10:22:51.182533 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/client.key: {Name:mkaec4010f0e41820acbef473dc41ecc4824f0f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:51.182630 347534 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1
I1213 10:22:51.182641 347534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1213 10:22:51.355751 347534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1 ...
I1213 10:22:51.355766 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1: {Name:mk625ae06d684270f1a880352d905723e4d9cae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:51.355944 347534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1 ...
I1213 10:22:51.355955 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1: {Name:mk46d2406f9ca3b99d70727d83a0decbfedb1fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:51.356035 347534 certs.go:382] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt.86e7afd1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt
I1213 10:22:51.356141 347534 certs.go:386] copying /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key.86e7afd1 -> /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key
I1213 10:22:51.356232 347534 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key
I1213 10:22:51.356243 347534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt with IP's: []
I1213 10:22:51.538627 347534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt ...
I1213 10:22:51.538641 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt: {Name:mkaec43616cd7e9caf744292336b5f7c8de54b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:51.538831 347534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key ...
I1213 10:22:51.538840 347534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key: {Name:mk250d4e62be0df79e0e99621e601bc0253543ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 10:22:51.539033 347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem (1338 bytes)
W1213 10:22:51.539074 347534 certs.go:480] ignoring /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915_empty.pem, impossibly tiny 0 bytes
I1213 10:22:51.539081 347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca-key.pem (1675 bytes)
I1213 10:22:51.539113 347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/ca.pem (1082 bytes)
I1213 10:22:51.539138 347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/cert.pem (1123 bytes)
I1213 10:22:51.539162 347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/certs/key.pem (1675 bytes)
I1213 10:22:51.539208 347534 certs.go:484] found cert: /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem (1708 bytes)
I1213 10:22:51.539807 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1213 10:22:51.558901 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1213 10:22:51.576898 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1213 10:22:51.594740 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1213 10:22:51.613783 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1213 10:22:51.632331 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1213 10:22:51.649950 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1213 10:22:51.667691 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/profiles/functional-652709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1213 10:22:51.685767 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/certs/308915.pem --> /usr/share/ca-certificates/308915.pem (1338 bytes)
I1213 10:22:51.703954 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/files/etc/ssl/certs/3089152.pem --> /usr/share/ca-certificates/3089152.pem (1708 bytes)
I1213 10:22:51.722146 347534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22127-307042/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1213 10:22:51.739724 347534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1213 10:22:51.752504 347534 ssh_runner.go:195] Run: openssl version
I1213 10:22:51.758624 347534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/308915.pem
I1213 10:22:51.766277 347534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/308915.pem /etc/ssl/certs/308915.pem
I1213 10:22:51.773619 347534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/308915.pem
I1213 10:22:51.777349 347534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 10:22 /usr/share/ca-certificates/308915.pem
I1213 10:22:51.777407 347534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/308915.pem
I1213 10:22:51.818655 347534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1213 10:22:51.826274 347534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/308915.pem /etc/ssl/certs/51391683.0
I1213 10:22:51.833740 347534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3089152.pem
I1213 10:22:51.841289 347534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3089152.pem /etc/ssl/certs/3089152.pem
I1213 10:22:51.848802 347534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3089152.pem
I1213 10:22:51.852407 347534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 10:22 /usr/share/ca-certificates/3089152.pem
I1213 10:22:51.852462 347534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3089152.pem
I1213 10:22:51.893814 347534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1213 10:22:51.901368 347534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3089152.pem /etc/ssl/certs/3ec20f2e.0
I1213 10:22:51.908943 347534 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1213 10:22:51.916542 347534 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1213 10:22:51.924152 347534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1213 10:22:51.927806 347534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 10:13 /usr/share/ca-certificates/minikubeCA.pem
I1213 10:22:51.927863 347534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1213 10:22:51.968628 347534 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1213 10:22:51.976324 347534 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1213 10:22:51.984190 347534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1213 10:22:51.987955 347534 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1213 10:22:51.988010 347534 kubeadm.go:401] StartCluster: {Name:functional-652709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-652709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 10:22:51.988103 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1213 10:22:51.988161 347534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1213 10:22:52.023448 347534 cri.go:89] found id: ""
I1213 10:22:52.023514 347534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1213 10:22:52.031813 347534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1213 10:22:52.040029 347534 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1213 10:22:52.040106 347534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 10:22:52.048323 347534 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 10:22:52.048341 347534 kubeadm.go:158] found existing configuration files:
I1213 10:22:52.048403 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1213 10:22:52.056328 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 10:22:52.056393 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 10:22:52.064100 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1213 10:22:52.072005 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 10:22:52.072061 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 10:22:52.079997 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1213 10:22:52.088156 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 10:22:52.088216 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 10:22:52.096056 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1213 10:22:52.104211 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 10:22:52.104272 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 10:22:52.112247 347534 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1213 10:22:52.152816 347534 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1213 10:22:52.152866 347534 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 10:22:52.255492 347534 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1213 10:22:52.255557 347534 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1213 10:22:52.255592 347534 kubeadm.go:319] OS: Linux
I1213 10:22:52.255635 347534 kubeadm.go:319] CGROUPS_CPU: enabled
I1213 10:22:52.255682 347534 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1213 10:22:52.255728 347534 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1213 10:22:52.255775 347534 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1213 10:22:52.255822 347534 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1213 10:22:52.255870 347534 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1213 10:22:52.255914 347534 kubeadm.go:319] CGROUPS_PIDS: enabled
I1213 10:22:52.255961 347534 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1213 10:22:52.256006 347534 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1213 10:22:52.326450 347534 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 10:22:52.326553 347534 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 10:22:52.326642 347534 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 10:22:52.335210 347534 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 10:22:52.341846 347534 out.go:252] - Generating certificates and keys ...
I1213 10:22:52.341957 347534 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 10:22:52.342034 347534 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 10:22:52.644701 347534 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1213 10:22:52.704498 347534 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1213 10:22:52.866219 347534 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1213 10:22:53.548467 347534 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1213 10:22:54.312965 347534 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1213 10:22:54.313105 347534 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1213 10:22:54.783858 347534 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1213 10:22:54.784181 347534 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1213 10:22:54.991172 347534 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1213 10:22:55.395053 347534 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1213 10:22:55.782378 347534 kubeadm.go:319] [certs] Generating "sa" key and public key
I1213 10:22:55.782452 347534 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 10:22:56.182323 347534 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 10:22:56.665880 347534 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 10:22:56.877677 347534 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 10:22:57.168905 347534 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 10:22:58.263186 347534 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 10:22:58.263819 347534 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 10:22:58.266856 347534 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 10:22:58.270397 347534 out.go:252] - Booting up control plane ...
I1213 10:22:58.270498 347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 10:22:58.270574 347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 10:22:58.270640 347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 10:22:58.285618 347534 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 10:22:58.285850 347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 10:22:58.295228 347534 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 10:22:58.295330 347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 10:22:58.295369 347534 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 10:22:58.423186 347534 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 10:22:58.423299 347534 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 10:26:58.419237 347534 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001296773s
I1213 10:26:58.419260 347534 kubeadm.go:319]
I1213 10:26:58.419312 347534 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1213 10:26:58.419343 347534 kubeadm.go:319] - The kubelet is not running
I1213 10:26:58.419440 347534 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1213 10:26:58.419445 347534 kubeadm.go:319]
I1213 10:26:58.419542 347534 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1213 10:26:58.419571 347534 kubeadm.go:319] - 'systemctl status kubelet'
I1213 10:26:58.419599 347534 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1213 10:26:58.419602 347534 kubeadm.go:319]
I1213 10:26:58.425066 347534 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1213 10:26:58.425466 347534 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1213 10:26:58.425567 347534 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1213 10:26:58.425788 347534 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1213 10:26:58.425792 347534 kubeadm.go:319]
I1213 10:26:58.425855 347534 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
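The two commands kubeadm suggests above, plus a cgroup-mode check relevant to the cgroups v1 warning, as they would be run against this docker-driver node (a sketch; the container name comes from the profile):
docker exec functional-652709 systemctl status kubelet
docker exec functional-652709 journalctl -xeu kubelet --no-pager
# stat prints cgroup2fs on a cgroups v2 host, tmpfs on the deprecated v1 layout
docker exec functional-652709 stat -fc %T /sys/fs/cgroup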
W1213 10:26:58.425960 347534 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-652709 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001296773s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
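The warnings in the dump above name the actual blocker: on this cgroups v1 host, kubelet v1.35 refuses to start unless the KubeletConfiguration option 'FailCgroupV1' is set to 'false'. A minimal sketch of that override, assuming the camelCase YAML spelling of the field, and noting that kubeadm regenerates /var/lib/kubelet/config.yaml on the next init, so this is a diagnostic experiment rather than a durable fix:

    # Hedged sketch: opt the kubelet back into cgroups v1 per the
    # [WARNING SystemVerification] text, then restart and re-check health.
    sudo /bin/bash -c 'echo "failCgroupV1: false" >> /var/lib/kubelet/config.yaml'
    sudo systemctl restart kubelet
    curl -sSL http://127.0.0.1:10248/healthz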
I1213 10:26:58.426053 347534 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1213 10:26:58.845571 347534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1213 10:26:58.858891 347534 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1213 10:26:58.858946 347534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 10:26:58.866822 347534 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 10:26:58.866831 347534 kubeadm.go:158] found existing configuration files:
I1213 10:26:58.866880 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1213 10:26:58.874351 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 10:26:58.874411 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 10:26:58.881444 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1213 10:26:58.889131 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 10:26:58.889192 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 10:26:58.896496 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1213 10:26:58.903940 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 10:26:58.903995 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 10:26:58.911383 347534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1213 10:26:58.918878 347534 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 10:26:58.918935 347534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 10:26:58.926388 347534 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1213 10:26:58.966153 347534 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1213 10:26:58.966388 347534 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 10:26:59.038263 347534 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1213 10:26:59.038324 347534 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1213 10:26:59.038360 347534 kubeadm.go:319] OS: Linux
I1213 10:26:59.038402 347534 kubeadm.go:319] CGROUPS_CPU: enabled
I1213 10:26:59.038445 347534 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1213 10:26:59.038488 347534 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1213 10:26:59.038533 347534 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1213 10:26:59.038577 347534 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1213 10:26:59.038621 347534 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1213 10:26:59.038662 347534 kubeadm.go:319] CGROUPS_PIDS: enabled
I1213 10:26:59.038718 347534 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1213 10:26:59.038761 347534 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1213 10:26:59.101715 347534 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 10:26:59.101849 347534 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 10:26:59.101956 347534 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 10:26:59.111063 347534 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 10:26:59.114418 347534 out.go:252] - Generating certificates and keys ...
I1213 10:26:59.114518 347534 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 10:26:59.114598 347534 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 10:26:59.114681 347534 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1213 10:26:59.114803 347534 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1213 10:26:59.114868 347534 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1213 10:26:59.114918 347534 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1213 10:26:59.114983 347534 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1213 10:26:59.115048 347534 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1213 10:26:59.115144 347534 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1213 10:26:59.115222 347534 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1213 10:26:59.115258 347534 kubeadm.go:319] [certs] Using the existing "sa" key
I1213 10:26:59.115311 347534 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 10:26:59.170504 347534 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 10:26:59.450680 347534 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 10:26:59.886874 347534 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 10:27:00.166515 347534 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 10:27:00.523183 347534 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 10:27:00.523691 347534 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 10:27:00.526396 347534 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 10:27:00.529466 347534 out.go:252] - Booting up control plane ...
I1213 10:27:00.529571 347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 10:27:00.529648 347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 10:27:00.529714 347534 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 10:27:00.550934 347534 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 10:27:00.551347 347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 10:27:00.559929 347534 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 10:27:00.560370 347534 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 10:27:00.560605 347534 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 10:27:00.691215 347534 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 10:27:00.691328 347534 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 10:31:00.687036 347534 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000358804s
I1213 10:31:00.695089 347534 kubeadm.go:319]
I1213 10:31:00.695237 347534 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1213 10:31:00.695275 347534 kubeadm.go:319] - The kubelet is not running
I1213 10:31:00.695400 347534 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1213 10:31:00.695405 347534 kubeadm.go:319]
I1213 10:31:00.695529 347534 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1213 10:31:00.695562 347534 kubeadm.go:319] - 'systemctl status kubelet'
I1213 10:31:00.695600 347534 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1213 10:31:00.695604 347534 kubeadm.go:319]
I1213 10:31:00.700193 347534 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1213 10:31:00.700668 347534 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1213 10:31:00.700794 347534 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1213 10:31:00.701077 347534 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1213 10:31:00.701082 347534 kubeadm.go:319]
I1213 10:31:00.701155 347534 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1213 10:31:00.701216 347534 kubeadm.go:403] duration metric: took 8m8.71320916s to StartCluster
I1213 10:31:00.701258 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1213 10:31:00.701341 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1213 10:31:00.727702 347534 cri.go:89] found id: ""
I1213 10:31:00.727730 347534 logs.go:282] 0 containers: []
W1213 10:31:00.727737 347534 logs.go:284] No container was found matching "kube-apiserver"
I1213 10:31:00.727743 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1213 10:31:00.727810 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1213 10:31:00.752310 347534 cri.go:89] found id: ""
I1213 10:31:00.752325 347534 logs.go:282] 0 containers: []
W1213 10:31:00.752332 347534 logs.go:284] No container was found matching "etcd"
I1213 10:31:00.752336 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1213 10:31:00.752393 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1213 10:31:00.776946 347534 cri.go:89] found id: ""
I1213 10:31:00.776960 347534 logs.go:282] 0 containers: []
W1213 10:31:00.776967 347534 logs.go:284] No container was found matching "coredns"
I1213 10:31:00.776972 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1213 10:31:00.777027 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1213 10:31:00.800007 347534 cri.go:89] found id: ""
I1213 10:31:00.800021 347534 logs.go:282] 0 containers: []
W1213 10:31:00.800028 347534 logs.go:284] No container was found matching "kube-scheduler"
I1213 10:31:00.800033 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1213 10:31:00.800091 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1213 10:31:00.824757 347534 cri.go:89] found id: ""
I1213 10:31:00.824771 347534 logs.go:282] 0 containers: []
W1213 10:31:00.824778 347534 logs.go:284] No container was found matching "kube-proxy"
I1213 10:31:00.824783 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1213 10:31:00.824840 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1213 10:31:00.849594 347534 cri.go:89] found id: ""
I1213 10:31:00.849608 347534 logs.go:282] 0 containers: []
W1213 10:31:00.849615 347534 logs.go:284] No container was found matching "kube-controller-manager"
I1213 10:31:00.849622 347534 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1213 10:31:00.849680 347534 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1213 10:31:00.877004 347534 cri.go:89] found id: ""
I1213 10:31:00.877019 347534 logs.go:282] 0 containers: []
W1213 10:31:00.877026 347534 logs.go:284] No container was found matching "kindnet"
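Each probe above can be reproduced by hand from inside the node (e.g. via minikube ssh -p functional-652709); the empty "found id" results mean no control-plane container was ever created, consistent with the kubelet never coming up:

    # Same command minikube runs for each component; an empty result
    # (no container IDs) matches the "0 containers" lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver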
I1213 10:31:00.877035 347534 logs.go:123] Gathering logs for kubelet ...
I1213 10:31:00.877046 347534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1213 10:31:00.933417 347534 logs.go:123] Gathering logs for dmesg ...
I1213 10:31:00.933437 347534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1213 10:31:00.949839 347534 logs.go:123] Gathering logs for describe nodes ...
I1213 10:31:00.949858 347534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1213 10:31:01.016526 347534 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1213 10:31:01.007626 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.008312 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.010133 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.010946 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.012542 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1213 10:31:01.007626 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.008312 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.010133 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.010946 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:01.012542 4805 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1213 10:31:01.016537 347534 logs.go:123] Gathering logs for containerd ...
I1213 10:31:01.016548 347534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1213 10:31:01.054604 347534 logs.go:123] Gathering logs for container status ...
I1213 10:31:01.054625 347534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1213 10:31:01.084905 347534 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000358804s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1213 10:31:01.084943 347534 out.go:285] *
W1213 10:31:01.085001 347534 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000358804s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1213 10:31:01.085017 347534 out.go:285] *
W1213 10:31:01.091466 347534 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1213 10:31:01.098112 347534 out.go:203]
W1213 10:31:01.101014 347534 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000358804s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1213 10:31:01.101069 347534 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1213 10:31:01.101089 347534 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1213 10:31:01.104082 347534 out.go:203]
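Spelled out as a retry, the suggestion printed above amounts to the following invocation against the same profile (the remaining flags from the original start command are unchanged and omitted here):

    # From the K8S_KUBELET_NOT_RUNNING suggestion; whether it helps depends on
    # whether the cgroup driver, rather than the cgroups v1 host, is the problem.
    out/minikube-linux-arm64 start -p functional-652709 \
      --extra-config=kubelet.cgroup-driver=systemd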
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286559306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286625243Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286802697Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286894546Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.286955101Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287017616Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287084579Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287157860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287231370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287320372Z" level=info msg="Connect containerd service"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.287675083Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.288415070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302170119Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302336151Z" level=info msg="Start subscribing containerd event"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302412771Z" level=info msg="Start recovering state"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.302463783Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340466841Z" level=info msg="Start event monitor"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340652516Z" level=info msg="Start cni network conf syncer for default"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340721850Z" level=info msg="Start streaming server"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340785925Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340840260Z" level=info msg="runtime interface starting up..."
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340890927Z" level=info msg="starting plugins..."
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.340987814Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 13 10:22:50 functional-652709 containerd[763]: time="2025-12-13T10:22:50.341230852Z" level=info msg="containerd successfully booted in 0.080598s"
Dec 13 10:22:50 functional-652709 systemd[1]: Started containerd.service - containerd container runtime.
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1213 10:31:02.122187 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:02.122792 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:02.124658 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:02.125244 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 10:31:02.126987 4923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[ +12.907693] overlayfs: idmapped layers are currently not supported
[Dec13 09:25] overlayfs: idmapped layers are currently not supported
[ +26.192425] overlayfs: idmapped layers are currently not supported
[Dec13 09:26] overlayfs: idmapped layers are currently not supported
[ +25.729788] overlayfs: idmapped layers are currently not supported
[Dec13 09:27] overlayfs: idmapped layers are currently not supported
[Dec13 09:28] overlayfs: idmapped layers are currently not supported
[Dec13 09:31] overlayfs: idmapped layers are currently not supported
[Dec13 09:32] overlayfs: idmapped layers are currently not supported
[ +17.979093] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[Dec13 09:43] overlayfs: idmapped layers are currently not supported
[Dec13 09:45] overlayfs: idmapped layers are currently not supported
[ +25.885102] overlayfs: idmapped layers are currently not supported
[Dec13 09:46] overlayfs: idmapped layers are currently not supported
[ +22.078149] overlayfs: idmapped layers are currently not supported
[Dec13 09:47] overlayfs: idmapped layers are currently not supported
[Dec13 09:48] overlayfs: idmapped layers are currently not supported
[Dec13 09:49] overlayfs: idmapped layers are currently not supported
[Dec13 09:51] overlayfs: idmapped layers are currently not supported
[ +17.043564] overlayfs: idmapped layers are currently not supported
[Dec13 09:52] overlayfs: idmapped layers are currently not supported
[Dec13 09:53] overlayfs: idmapped layers are currently not supported
[Dec13 10:12] kauditd_printk_skb: 8 callbacks suppressed
[Dec13 10:19] hrtimer: interrupt took 21247146 ns
==> kernel <==
10:31:02 up 3:13, 0 user, load average: 0.32, 0.54, 1.04
Linux functional-652709 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 13 10:30:59 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:30:59 functional-652709 kubelet[4723]: E1213 10:30:59.203770 4723 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 13 10:30:59 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:30:59 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:30:59 functional-652709 kubelet[4729]: E1213 10:30:59.955749 4729 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 10:30:59 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 10:31:00 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 13 10:31:00 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:31:00 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:31:00 functional-652709 kubelet[4734]: E1213 10:31:00.716183 4734 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 10:31:00 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 10:31:00 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 10:31:01 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 13 10:31:01 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:31:01 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:31:01 functional-652709 kubelet[4835]: E1213 10:31:01.457933 4835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 10:31:01 functional-652709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 10:31:01 functional-652709 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 10:31:02 functional-652709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 13 10:31:02 functional-652709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 10:31:02 functional-652709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
-- /stdout --
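The journal excerpt explains the restart counter: every kubelet attempt (PIDs 4723, 4729, 4734, 4835) exits during configuration validation because the host runs cgroups v1. A quick way to confirm which hierarchy the node mounts, runnable on any Linux host:

    # Filesystem type backing the cgroup mount:
    # "cgroup2fs" means the unified v2 hierarchy, "tmpfs" means legacy v1.
    stat -fc %T /sys/fs/cgroup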
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-652709 -n functional-652709: exit status 6 (341.316645ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1213 10:31:02.604146 353323 status.go:458] kubeconfig endpoint: get endpoint: "functional-652709" does not appear in /home/jenkins/minikube-integration/22127-307042/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-652709" apiserver is not running, skipping kubectl commands (state="Stopped")
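Independent of the kubelet failure, the status output flags a stale kubectl context and a profile missing from the kubeconfig; the remedy it names is:

    # As suggested in the status stdout above; -p selects the same profile.
    out/minikube-linux-arm64 update-context -p functional-652709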
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (503.24s)