=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 08:39:19.930450 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:41:36.065753 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:03.776937 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.331506 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.337857 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.349359 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.370810 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.412272 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.493653 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.655352 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:57.977062 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:58.618600 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:42:59.900030 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:02.461492 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:07.583416 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:17.825584 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:43:38.307872 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:44:19.270927 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:45:41.192412 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-181746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:46:36.062179 4292 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/addons-962295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
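Note: the cert_rotation errors above point at client.crt files for profiles (addons-962295, functional-181746) that were deleted earlier in this run; the likeliest source is stale kubeconfig contexts still referencing those certs. A minimal cleanup sketch, assuming context names match the profile names (this would silence the noise but is unrelated to the failure below):
  kubectl config delete-context addons-962295
  kubectl config delete-context functional-181746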
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.494585893s)
-- stdout --
* [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22049
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-090986" primary control-plane node in "functional-090986" cluster
* Pulling base image v0.0.48-1764843390-22032 ...
* Found network options:
- HTTP_PROXY=localhost:37029
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:37029 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001310933s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001195219s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001195219s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
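Note: the two stderr warnings map to two candidate remediations. A sketch of both, per the proxy handbook link and the "Suggestion" line above (the NO_PROXY value and the retry flags are assumptions, not a verified fix for this failure):
  # Let traffic to the minikube IP (192.168.49.2) bypass the local proxy
  export NO_PROXY=localhost,127.0.0.1,192.168.49.2
  # Retry with the kubelet cgroup driver pinned to systemd, as suggested above; the
  # cgroups-v1 deprecation warning separately points at setting the KubeletConfiguration
  # option 'FailCgroupV1' to 'false' (quoted from the kubeadm warning).
  out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd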
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-090986
helpers_test.go:243: (dbg) docker inspect functional-090986:
-- stdout --
[
{
"Id": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
"Created": "2025-12-06T08:38:54.137142754Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 43250,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-06T08:38:54.209992266Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
"ResolvConfPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hostname",
"HostsPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/hosts",
"LogPath": "/var/lib/docker/containers/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3/0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3-json.log",
"Name": "/functional-090986",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-090986:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-090986",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0202a22115dfc3e21f6dc3375abd5da95eb8100e5b13b079e1c6b7d2cfeacfb3",
"LowerDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5-init/diff:/var/lib/docker/overlay2/9859823a1e6d9795ce39330197ee2f0d4ebbed0af0bdd4e7bf4eb1c7d1658e65/diff",
"MergedDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/merged",
"UpperDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/diff",
"WorkDir": "/var/lib/docker/overlay2/ff9c74b0fa5f527881c5b976f1526cb7eac808abe50318fd9997e1cc2f7496b5/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-090986",
"Source": "/var/lib/docker/volumes/functional-090986/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-090986",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-090986",
"name.minikube.sigs.k8s.io": "functional-090986",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "96a7b0ec258444d1c8ac066405cac717b46821086eaad82018730483660c1220",
"SandboxKey": "/var/run/docker/netns/96a7b0ec2584",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-090986": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ee:de:4e:f1:7a:31",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "decfdd2806a4e3ecb1801260e31578d759fe2e36041a31e857e5638a924a6984",
"EndpointID": "9e81653c5d5c3ed84aba6e787365ffae307a192fae40947ac9de94cf993b2d90",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-090986",
"0202a22115df"
]
}
}
}
}
]
-- /stdout --
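Note: for triage, the relevant fields of that inspect blob can be read directly with docker's built-in Go templates, as the harness itself does further down (container name from this run):
  # Container state and the static IP minikube assigned on its network
  docker inspect -f '{{.State.Status}}' functional-090986
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' functional-090986
  # Host port published for the apiserver port 8441 (32791 in this run)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-090986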
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-090986 -n functional-090986: exit status 6 (330.475238ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1206 08:47:09.893250 48391 status.go:458] kubeconfig endpoint: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
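Note: the status error says "functional-090986" never made it into the kubeconfig (the failed start never wrote its endpoint). A sketch of the fix the warning itself recommends; it repairs kubectl's context only, the cluster would still be broken:
  out/minikube-linux-arm64 -p functional-090986 update-context
  out/minikube-linux-arm64 status -p functional-090986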
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-090986 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ mount │ -p functional-181746 --kill=true │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ │
│ addons │ functional-181746 addons list │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ addons │ functional-181746 addons list -o json │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ service │ functional-181746 service hello-node-connect --url │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ start │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ │
│ start │ -p functional-181746 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ │
│ service │ functional-181746 service list │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ start │ -p functional-181746 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ │
│ dashboard │ --url --port 36195 -p functional-181746 --alsologtostderr -v=1 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ service │ functional-181746 service list -o json │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ service │ functional-181746 service --namespace=default --https --url hello-node │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ service │ functional-181746 service hello-node --url --format={{.IP}} │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ service │ functional-181746 service hello-node --url │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ image │ functional-181746 image ls --format short --alsologtostderr │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ image │ functional-181746 image ls --format yaml --alsologtostderr │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ ssh │ functional-181746 ssh pgrep buildkitd │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ │
│ image │ functional-181746 image build -t localhost/my-image:functional-181746 testdata/build --alsologtostderr │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ image │ functional-181746 image ls --format json --alsologtostderr │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ image │ functional-181746 image ls --format table --alsologtostderr │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ update-context │ functional-181746 update-context --alsologtostderr -v=2 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ update-context │ functional-181746 update-context --alsologtostderr -v=2 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ update-context │ functional-181746 update-context --alsologtostderr -v=2 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ image │ functional-181746 image ls │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ delete │ -p functional-181746 │ functional-181746 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ 06 Dec 25 08:38 UTC │
│ start │ -p functional-090986 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-090986 │ jenkins │ v1.37.0 │ 06 Dec 25 08:38 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/06 08:38:49
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1206 08:38:49.100563 42853 out.go:360] Setting OutFile to fd 1 ...
I1206 08:38:49.100665 42853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:49.100668 42853 out.go:374] Setting ErrFile to fd 2...
I1206 08:38:49.100674 42853 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:38:49.101085 42853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-2448/.minikube/bin
I1206 08:38:49.101920 42853 out.go:368] Setting JSON to false
I1206 08:38:49.102709 42853 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1280,"bootTime":1765009049,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1206 08:38:49.102766 42853 start.go:143] virtualization:
I1206 08:38:49.107187 42853 out.go:179] * [functional-090986] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1206 08:38:49.111876 42853 out.go:179] - MINIKUBE_LOCATION=22049
I1206 08:38:49.111976 42853 notify.go:221] Checking for updates...
I1206 08:38:49.119282 42853 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1206 08:38:49.122526 42853 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22049-2448/kubeconfig
I1206 08:38:49.125743 42853 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-2448/.minikube
I1206 08:38:49.128895 42853 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1206 08:38:49.131988 42853 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1206 08:38:49.135284 42853 driver.go:422] Setting default libvirt URI to qemu:///system
I1206 08:38:49.154349 42853 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1206 08:38:49.154459 42853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 08:38:49.216433 42853 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 08:38:49.207353366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 08:38:49.216521 42853 docker.go:319] overlay module found
I1206 08:38:49.219797 42853 out.go:179] * Using the docker driver based on user configuration
I1206 08:38:49.222835 42853 start.go:309] selected driver: docker
I1206 08:38:49.222843 42853 start.go:927] validating driver "docker" against <nil>
I1206 08:38:49.222865 42853 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1206 08:38:49.223700 42853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 08:38:49.276155 42853 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 08:38:49.266928356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 08:38:49.276307 42853 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1206 08:38:49.276529 42853 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1206 08:38:49.279516 42853 out.go:179] * Using Docker driver with root privileges
I1206 08:38:49.282477 42853 cni.go:84] Creating CNI manager for ""
I1206 08:38:49.282544 42853 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1206 08:38:49.282551 42853 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1206 08:38:49.282659 42853 start.go:353] cluster config:
{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 08:38:49.285936 42853 out.go:179] * Starting "functional-090986" primary control-plane node in "functional-090986" cluster
I1206 08:38:49.289083 42853 cache.go:134] Beginning downloading kic base image for docker with containerd
I1206 08:38:49.292088 42853 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
I1206 08:38:49.295021 42853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 08:38:49.295058 42853 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1206 08:38:49.295066 42853 cache.go:65] Caching tarball of preloaded images
I1206 08:38:49.295103 42853 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
I1206 08:38:49.295146 42853 preload.go:238] Found /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1206 08:38:49.295155 42853 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1206 08:38:49.295595 42853 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
I1206 08:38:49.295614 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json: {Name:mk3148d8af8d6ef4b551b6331eae19668215bd59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:38:49.314580 42853 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
I1206 08:38:49.314590 42853 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
I1206 08:38:49.314616 42853 cache.go:243] Successfully downloaded all kic artifacts
I1206 08:38:49.314637 42853 start.go:360] acquireMachinesLock for functional-090986: {Name:mke7a47c04cec928ef96188b4f2167ea79e00dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1206 08:38:49.314745 42853 start.go:364] duration metric: took 94.08µs to acquireMachinesLock for "functional-090986"
I1206 08:38:49.314776 42853 start.go:93] Provisioning new machine with config: &{Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1206 08:38:49.314841 42853 start.go:125] createHost starting for "" (driver="docker")
I1206 08:38:49.318155 42853 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1206 08:38:49.318405 42853 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:37029 to docker env.
I1206 08:38:49.318428 42853 start.go:159] libmachine.API.Create for "functional-090986" (driver="docker")
I1206 08:38:49.318450 42853 client.go:173] LocalClient.Create starting
I1206 08:38:49.318528 42853 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem
I1206 08:38:49.318561 42853 main.go:143] libmachine: Decoding PEM data...
I1206 08:38:49.318574 42853 main.go:143] libmachine: Parsing certificate...
I1206 08:38:49.318642 42853 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem
I1206 08:38:49.318662 42853 main.go:143] libmachine: Decoding PEM data...
I1206 08:38:49.318673 42853 main.go:143] libmachine: Parsing certificate...
I1206 08:38:49.319017 42853 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 08:38:49.333355 42853 cli_runner.go:211] docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 08:38:49.333421 42853 network_create.go:284] running [docker network inspect functional-090986] to gather additional debugging logs...
I1206 08:38:49.333443 42853 cli_runner.go:164] Run: docker network inspect functional-090986
W1206 08:38:49.349114 42853 cli_runner.go:211] docker network inspect functional-090986 returned with exit code 1
I1206 08:38:49.349134 42853 network_create.go:287] error running [docker network inspect functional-090986]: docker network inspect functional-090986: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-090986 not found
I1206 08:38:49.349147 42853 network_create.go:289] output of [docker network inspect functional-090986]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-090986 not found
** /stderr **
I1206 08:38:49.349250 42853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 08:38:49.370202 42853 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193c550}
I1206 08:38:49.370235 42853 network_create.go:124] attempt to create docker network functional-090986 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1206 08:38:49.370287 42853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-090986 functional-090986
I1206 08:38:49.436785 42853 network_create.go:108] docker network functional-090986 192.168.49.0/24 created
I1206 08:38:49.436806 42853 kic.go:121] calculated static IP "192.168.49.2" for the "functional-090986" container
I1206 08:38:49.436893 42853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1206 08:38:49.452228 42853 cli_runner.go:164] Run: docker volume create functional-090986 --label name.minikube.sigs.k8s.io=functional-090986 --label created_by.minikube.sigs.k8s.io=true
I1206 08:38:49.469493 42853 oci.go:103] Successfully created a docker volume functional-090986
I1206 08:38:49.469571 42853 cli_runner.go:164] Run: docker run --rm --name functional-090986-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-090986 --entrypoint /usr/bin/test -v functional-090986:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
I1206 08:38:50.041707 42853 oci.go:107] Successfully prepared a docker volume functional-090986
I1206 08:38:50.041767 42853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 08:38:50.041776 42853 kic.go:194] Starting extracting preloaded images to volume ...
I1206 08:38:50.041858 42853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-090986:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
I1206 08:38:54.065453 42853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22049-2448/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-090986:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.02356114s)
I1206 08:38:54.065476 42853 kic.go:203] duration metric: took 4.023696s to extract preloaded images to volume ...
W1206 08:38:54.065639 42853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1206 08:38:54.065769 42853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1206 08:38:54.122100 42853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-090986 --name functional-090986 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-090986 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-090986 --network functional-090986 --ip 192.168.49.2 --volume functional-090986:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
I1206 08:38:54.427567 42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Running}}
I1206 08:38:54.446501 42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 08:38:54.476096 42853 cli_runner.go:164] Run: docker exec functional-090986 stat /var/lib/dpkg/alternatives/iptables
I1206 08:38:54.529518 42853 oci.go:144] the created container "functional-090986" has a running status.
I1206 08:38:54.529538 42853 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa...
I1206 08:38:55.213719 42853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1206 08:38:55.234130 42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 08:38:55.252089 42853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1206 08:38:55.252100 42853 kic_runner.go:114] Args: [docker exec --privileged functional-090986 chown docker:docker /home/docker/.ssh/authorized_keys]
I1206 08:38:55.293105 42853 cli_runner.go:164] Run: docker container inspect functional-090986 --format={{.State.Status}}
I1206 08:38:55.311431 42853 machine.go:94] provisionDockerMachine start ...
I1206 08:38:55.311556 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:55.328516 42853 main.go:143] libmachine: Using SSH client type: native
I1206 08:38:55.328852 42853 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1206 08:38:55.328859 42853 main.go:143] libmachine: About to run SSH command:
hostname
I1206 08:38:55.329544 42853 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49280->127.0.0.1:32788: read: connection reset by peer
I1206 08:38:58.482997 42853 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
I1206 08:38:58.483011 42853 ubuntu.go:182] provisioning hostname "functional-090986"
I1206 08:38:58.483070 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:58.500584 42853 main.go:143] libmachine: Using SSH client type: native
I1206 08:38:58.500890 42853 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1206 08:38:58.500898 42853 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-090986 && echo "functional-090986" | sudo tee /etc/hostname
I1206 08:38:58.665277 42853 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-090986
I1206 08:38:58.665346 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:58.683359 42853 main.go:143] libmachine: Using SSH client type: native
I1206 08:38:58.683859 42853 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1206 08:38:58.683873 42853 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-090986' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-090986/g' /etc/hosts;
	else
		echo '127.0.1.1 functional-090986' | sudo tee -a /etc/hosts;
	fi
fi
I1206 08:38:58.835695 42853 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1206 08:38:58.835711 42853 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22049-2448/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-2448/.minikube}
I1206 08:38:58.835731 42853 ubuntu.go:190] setting up certificates
I1206 08:38:58.835740 42853 provision.go:84] configureAuth start
I1206 08:38:58.835805 42853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
I1206 08:38:58.854225 42853 provision.go:143] copyHostCerts
I1206 08:38:58.854290 42853 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem, removing ...
I1206 08:38:58.854297 42853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem
I1206 08:38:58.854375 42853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/ca.pem (1078 bytes)
I1206 08:38:58.854472 42853 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem, removing ...
I1206 08:38:58.854477 42853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem
I1206 08:38:58.854514 42853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/cert.pem (1123 bytes)
I1206 08:38:58.854608 42853 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem, removing ...
I1206 08:38:58.854612 42853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem
I1206 08:38:58.854638 42853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-2448/.minikube/key.pem (1675 bytes)
I1206 08:38:58.854698 42853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem org=jenkins.functional-090986 san=[127.0.0.1 192.168.49.2 functional-090986 localhost minikube]
I1206 08:38:59.087139 42853 provision.go:177] copyRemoteCerts
I1206 08:38:59.087192 42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1206 08:38:59.087231 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:59.104115 42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 08:38:59.211112 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1206 08:38:59.229566 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1206 08:38:59.247321 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1206 08:38:59.264487 42853 provision.go:87] duration metric: took 428.723643ms to configureAuth
I1206 08:38:59.264504 42853 ubuntu.go:206] setting minikube options for container-runtime
I1206 08:38:59.264683 42853 config.go:182] Loaded profile config "functional-090986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 08:38:59.264689 42853 machine.go:97] duration metric: took 3.953248131s to provisionDockerMachine
I1206 08:38:59.264694 42853 client.go:176] duration metric: took 9.946239466s to LocalClient.Create
I1206 08:38:59.264717 42853 start.go:167] duration metric: took 9.946287982s to libmachine.API.Create "functional-090986"
I1206 08:38:59.264723 42853 start.go:293] postStartSetup for "functional-090986" (driver="docker")
I1206 08:38:59.264732 42853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1206 08:38:59.264783 42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1206 08:38:59.264830 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:59.281845 42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 08:38:59.387952 42853 ssh_runner.go:195] Run: cat /etc/os-release
I1206 08:38:59.391227 42853 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1206 08:38:59.391245 42853 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1206 08:38:59.391255 42853 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/addons for local assets ...
I1206 08:38:59.391312 42853 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-2448/.minikube/files for local assets ...
I1206 08:38:59.391420 42853 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem -> 42922.pem in /etc/ssl/certs
I1206 08:38:59.391514 42853 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts -> hosts in /etc/test/nested/copy/4292
I1206 08:38:59.391556 42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4292
I1206 08:38:59.399349 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /etc/ssl/certs/42922.pem (1708 bytes)
I1206 08:38:59.417352 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/test/nested/copy/4292/hosts --> /etc/test/nested/copy/4292/hosts (40 bytes)
I1206 08:38:59.435604 42853 start.go:296] duration metric: took 170.868601ms for postStartSetup
I1206 08:38:59.435971 42853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
I1206 08:38:59.452907 42853 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/config.json ...
I1206 08:38:59.453175 42853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1206 08:38:59.453212 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:59.470399 42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 08:38:59.572241 42853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1206 08:38:59.576904 42853 start.go:128] duration metric: took 10.262050596s to createHost
I1206 08:38:59.576919 42853 start.go:83] releasing machines lock for "functional-090986", held for 10.262167675s
I1206 08:38:59.577014 42853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-090986
I1206 08:38:59.598202 42853 out.go:179] * Found network options:
I1206 08:38:59.601134 42853 out.go:179] - HTTP_PROXY=localhost:37029
W1206 08:38:59.603909 42853 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1206 08:38:59.606754 42853 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1206 08:38:59.609670 42853 ssh_runner.go:195] Run: cat /version.json
I1206 08:38:59.609712 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:59.609746 42853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1206 08:38:59.609794 42853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-090986
I1206 08:38:59.626671 42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 08:38:59.627953 42853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22049-2448/.minikube/machines/functional-090986/id_rsa Username:docker}
I1206 08:38:59.731365 42853 ssh_runner.go:195] Run: systemctl --version
I1206 08:38:59.849305 42853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1206 08:38:59.853626 42853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1206 08:38:59.853689 42853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1206 08:38:59.881938 42853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1206 08:38:59.881952 42853 start.go:496] detecting cgroup driver to use...
I1206 08:38:59.882006 42853 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1206 08:38:59.882060 42853 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1206 08:38:59.897313 42853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1206 08:38:59.910768 42853 docker.go:218] disabling cri-docker service (if available) ...
I1206 08:38:59.910820 42853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1206 08:38:59.928532 42853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1206 08:38:59.947026 42853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1206 08:39:00.216371 42853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1206 08:39:00.399200 42853 docker.go:234] disabling docker service ...
I1206 08:39:00.399267 42853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1206 08:39:00.429732 42853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1206 08:39:00.446210 42853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1206 08:39:00.575173 42853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1206 08:39:00.700025 42853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1206 08:39:00.713333 42853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1206 08:39:00.727750 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1206 08:39:00.736846 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1206 08:39:00.745664 42853 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1206 08:39:00.745731 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1206 08:39:00.754556 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1206 08:39:00.763405 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1206 08:39:00.772176 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1206 08:39:00.781049 42853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1206 08:39:00.789239 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1206 08:39:00.798100 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1206 08:39:00.807004 42853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1206 08:39:00.816519 42853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1206 08:39:00.824172 42853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1206 08:39:00.831530 42853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 08:39:00.956548 42853 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1206 08:39:01.106967 42853 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1206 08:39:01.107037 42853 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1206 08:39:01.111627 42853 start.go:564] Will wait 60s for crictl version
I1206 08:39:01.111696 42853 ssh_runner.go:195] Run: which crictl
I1206 08:39:01.116064 42853 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1206 08:39:01.141058 42853 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1206 08:39:01.141136 42853 ssh_runner.go:195] Run: containerd --version
I1206 08:39:01.165442 42853 ssh_runner.go:195] Run: containerd --version
I1206 08:39:01.191176 42853 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1206 08:39:01.194282 42853 cli_runner.go:164] Run: docker network inspect functional-090986 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 08:39:01.211170 42853 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1206 08:39:01.215490 42853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 08:39:01.225629 42853 kubeadm.go:884] updating cluster {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1206 08:39:01.225802 42853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 08:39:01.225862 42853 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 08:39:01.252238 42853 containerd.go:627] all images are preloaded for containerd runtime.
I1206 08:39:01.252255 42853 containerd.go:534] Images already preloaded, skipping extraction
I1206 08:39:01.252324 42853 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 08:39:01.283469 42853 containerd.go:627] all images are preloaded for containerd runtime.
I1206 08:39:01.283482 42853 cache_images.go:86] Images are preloaded, skipping loading
I1206 08:39:01.283490 42853 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1206 08:39:01.283603 42853 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-090986 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1206 08:39:01.283679 42853 ssh_runner.go:195] Run: sudo crictl info
I1206 08:39:01.310710 42853 cni.go:84] Creating CNI manager for ""
I1206 08:39:01.310721 42853 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1206 08:39:01.310739 42853 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1206 08:39:01.310761 42853 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-090986 NodeName:functional-090986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1206 08:39:01.310873 42853 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-090986"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1206 08:39:01.310944 42853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1206 08:39:01.319058 42853 binaries.go:51] Found k8s binaries, skipping transfer
I1206 08:39:01.319133 42853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1206 08:39:01.327237 42853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1206 08:39:01.340797 42853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1206 08:39:01.354926 42853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1206 08:39:01.369139 42853 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1206 08:39:01.372872 42853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 08:39:01.382951 42853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 08:39:01.501239 42853 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1206 08:39:01.517883 42853 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986 for IP: 192.168.49.2
I1206 08:39:01.517893 42853 certs.go:195] generating shared ca certs ...
I1206 08:39:01.517909 42853 certs.go:227] acquiring lock for ca certs: {Name:mkb7601b6e7349c8054e44623ead5840cbff8731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:01.518072 42853 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key
I1206 08:39:01.518123 42853 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key
I1206 08:39:01.518130 42853 certs.go:257] generating profile certs ...
I1206 08:39:01.518188 42853 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key
I1206 08:39:01.518199 42853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt with IP's: []
I1206 08:39:01.891340 42853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt ...
I1206 08:39:01.891357 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.crt: {Name:mke1ec76aa123a8f6ce84cf3e07a24e13477f1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:01.891561 42853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key ...
I1206 08:39:01.891568 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/client.key: {Name:mka00b3224bd4ccc89785c3a36f0add67caaa2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:01.891655 42853 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0
I1206 08:39:01.891667 42853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1206 08:39:02.140827 42853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0 ...
I1206 08:39:02.140854 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0: {Name:mk5d3e434d2ed04c59d8cd890b414cee687f2c8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:02.141038 42853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0 ...
I1206 08:39:02.141045 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0: {Name:mkdb53fda8d1fb12536578975153ac76b8fcdeba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:02.141122 42853 certs.go:382] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt.e2062ee0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt
I1206 08:39:02.141205 42853 certs.go:386] copying /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key.e2062ee0 -> /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key
I1206 08:39:02.141257 42853 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key
I1206 08:39:02.141268 42853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt with IP's: []
I1206 08:39:02.450858 42853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt ...
I1206 08:39:02.450872 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt: {Name:mk6e54e0a470699c5c89b212ebe3736aaa06cad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:02.451070 42853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key ...
I1206 08:39:02.451077 42853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key: {Name:mk8296563eb31ce160c7e5f8e2c09f3b0879cdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 08:39:02.451283 42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem (1338 bytes)
W1206 08:39:02.451321 42853 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292_empty.pem, impossibly tiny 0 bytes
I1206 08:39:02.451328 42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca-key.pem (1675 bytes)
I1206 08:39:02.451353 42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/ca.pem (1078 bytes)
I1206 08:39:02.451400 42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/cert.pem (1123 bytes)
I1206 08:39:02.451425 42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/certs/key.pem (1675 bytes)
I1206 08:39:02.451469 42853 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem (1708 bytes)
I1206 08:39:02.452045 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1206 08:39:02.472515 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1206 08:39:02.492103 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1206 08:39:02.510365 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1206 08:39:02.528770 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1206 08:39:02.547618 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1206 08:39:02.566394 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1206 08:39:02.584731 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/profiles/functional-090986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1206 08:39:02.603528 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1206 08:39:02.621625 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/certs/4292.pem --> /usr/share/ca-certificates/4292.pem (1338 bytes)
I1206 08:39:02.639932 42853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-2448/.minikube/files/etc/ssl/certs/42922.pem --> /usr/share/ca-certificates/42922.pem (1708 bytes)
I1206 08:39:02.657943 42853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1206 08:39:02.671605 42853 ssh_runner.go:195] Run: openssl version
I1206 08:39:02.677933 42853 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4292.pem
I1206 08:39:02.685517 42853 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4292.pem /etc/ssl/certs/4292.pem
I1206 08:39:02.693182 42853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4292.pem
I1206 08:39:02.696989 42853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 6 08:38 /usr/share/ca-certificates/4292.pem
I1206 08:39:02.697057 42853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4292.pem
I1206 08:39:02.738905 42853 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1206 08:39:02.746676 42853 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4292.pem /etc/ssl/certs/51391683.0
I1206 08:39:02.754291 42853 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42922.pem
I1206 08:39:02.761963 42853 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42922.pem /etc/ssl/certs/42922.pem
I1206 08:39:02.769900 42853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42922.pem
I1206 08:39:02.773765 42853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 6 08:38 /usr/share/ca-certificates/42922.pem
I1206 08:39:02.773819 42853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42922.pem
I1206 08:39:02.815217 42853 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1206 08:39:02.822845 42853 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42922.pem /etc/ssl/certs/3ec20f2e.0
I1206 08:39:02.830404 42853 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1206 08:39:02.837681 42853 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1206 08:39:02.845198 42853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1206 08:39:02.848890 42853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 6 08:29 /usr/share/ca-certificates/minikubeCA.pem
I1206 08:39:02.848945 42853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1206 08:39:02.889819 42853 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1206 08:39:02.897367 42853 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1206 08:39:02.905530 42853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1206 08:39:02.909101 42853 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1206 08:39:02.909145 42853 kubeadm.go:401] StartCluster: {Name:functional-090986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-090986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 08:39:02.909215 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1206 08:39:02.909280 42853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1206 08:39:02.942248 42853 cri.go:89] found id: ""
I1206 08:39:02.942308 42853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1206 08:39:02.950218 42853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1206 08:39:02.958089 42853 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1206 08:39:02.958144 42853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1206 08:39:02.965936 42853 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1206 08:39:02.965945 42853 kubeadm.go:158] found existing configuration files:
I1206 08:39:02.966006 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1206 08:39:02.974003 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1206 08:39:02.974064 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1206 08:39:02.981751 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1206 08:39:02.991281 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1206 08:39:02.991355 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1206 08:39:03.002114 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1206 08:39:03.011054 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1206 08:39:03.011113 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1206 08:39:03.019132 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1206 08:39:03.027214 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1206 08:39:03.027282 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1206 08:39:03.035270 42853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1206 08:39:03.142834 42853 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1206 08:39:03.143246 42853 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1206 08:39:03.222191 42853 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1206 08:43:07.123474 42853 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1206 08:43:07.123498 42853 kubeadm.go:319]
I1206 08:43:07.123717 42853 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1206 08:43:07.124435 42853 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1206 08:43:07.124487 42853 kubeadm.go:319] [preflight] Running pre-flight checks
I1206 08:43:07.124599 42853 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1206 08:43:07.124669 42853 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1206 08:43:07.124710 42853 kubeadm.go:319] OS: Linux
I1206 08:43:07.124768 42853 kubeadm.go:319] CGROUPS_CPU: enabled
I1206 08:43:07.124824 42853 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1206 08:43:07.124870 42853 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1206 08:43:07.124927 42853 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1206 08:43:07.124980 42853 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1206 08:43:07.125031 42853 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1206 08:43:07.125079 42853 kubeadm.go:319] CGROUPS_PIDS: enabled
I1206 08:43:07.125129 42853 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1206 08:43:07.125178 42853 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1206 08:43:07.125256 42853 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1206 08:43:07.125360 42853 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1206 08:43:07.125457 42853 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1206 08:43:07.125526 42853 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1206 08:43:07.128298 42853 out.go:252] - Generating certificates and keys ...
I1206 08:43:07.128406 42853 kubeadm.go:319] [certs] Using existing ca certificate authority
I1206 08:43:07.128471 42853 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1206 08:43:07.128541 42853 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1206 08:43:07.128601 42853 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1206 08:43:07.128661 42853 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1206 08:43:07.128710 42853 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1206 08:43:07.128762 42853 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1206 08:43:07.128940 42853 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1206 08:43:07.129007 42853 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1206 08:43:07.129144 42853 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1206 08:43:07.129213 42853 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1206 08:43:07.129276 42853 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1206 08:43:07.129326 42853 kubeadm.go:319] [certs] Generating "sa" key and public key
I1206 08:43:07.129381 42853 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1206 08:43:07.129434 42853 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1206 08:43:07.129500 42853 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1206 08:43:07.129554 42853 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1206 08:43:07.129624 42853 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1206 08:43:07.129678 42853 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1206 08:43:07.129769 42853 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1206 08:43:07.129835 42853 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1206 08:43:07.132901 42853 out.go:252] - Booting up control plane ...
I1206 08:43:07.133011 42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1206 08:43:07.133118 42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1206 08:43:07.133188 42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1206 08:43:07.133315 42853 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1206 08:43:07.133422 42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1206 08:43:07.133536 42853 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1206 08:43:07.133628 42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1206 08:43:07.133666 42853 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1206 08:43:07.133815 42853 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1206 08:43:07.133928 42853 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1206 08:43:07.134004 42853 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001310933s
I1206 08:43:07.134007 42853 kubeadm.go:319]
I1206 08:43:07.134063 42853 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1206 08:43:07.134102 42853 kubeadm.go:319] - The kubelet is not running
I1206 08:43:07.134208 42853 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1206 08:43:07.134211 42853 kubeadm.go:319]
I1206 08:43:07.134316 42853 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1206 08:43:07.134346 42853 kubeadm.go:319] - 'systemctl status kubelet'
I1206 08:43:07.134381 42853 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1206 08:43:07.134437 42853 kubeadm.go:319]
W1206 08:43:07.134514 42853 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-090986 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001310933s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
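The SystemVerification warning above names the relevant knob: on a cgroup v1 host, kubelet v1.35+ refuses to start unless cgroup v1 support is explicitly re-enabled. A hypothetical remediation sketch, not part of the recorded run, assuming the kubelet reads a flat KubeletConfiguration mapping from /var/lib/kubelet/config.yaml as the log indicates, and that the YAML field for the 'FailCgroupV1' option is the conventional lowerCamelCase failCgroupV1:
# Opt back in to cgroup v1, per the warning's 'FailCgroupV1' hint
# (field name casing is an assumption), then restart the kubelet.
echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl restart kubelet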
I1206 08:43:07.134607 42853 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1206 08:43:07.545231 42853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1206 08:43:07.560225 42853 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1206 08:43:07.560282 42853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1206 08:43:07.568070 42853 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1206 08:43:07.568079 42853 kubeadm.go:158] found existing configuration files:
I1206 08:43:07.568127 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1206 08:43:07.576165 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1206 08:43:07.576222 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1206 08:43:07.583697 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1206 08:43:07.591686 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1206 08:43:07.591747 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1206 08:43:07.599236 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1206 08:43:07.607046 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1206 08:43:07.607104 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1206 08:43:07.614591 42853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1206 08:43:07.622167 42853 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1206 08:43:07.622224 42853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1206 08:43:07.629628 42853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1206 08:43:07.667752 42853 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1206 08:43:07.668031 42853 kubeadm.go:319] [preflight] Running pre-flight checks
I1206 08:43:07.745214 42853 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1206 08:43:07.745299   42853 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1206 08:43:07.745343   42853 kubeadm.go:319] OS: Linux
I1206 08:43:07.745401   42853 kubeadm.go:319] CGROUPS_CPU: enabled
I1206 08:43:07.745460   42853 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1206 08:43:07.745519   42853 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1206 08:43:07.745578   42853 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1206 08:43:07.745638   42853 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1206 08:43:07.745697   42853 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1206 08:43:07.745754   42853 kubeadm.go:319] CGROUPS_PIDS: enabled
I1206 08:43:07.745801   42853 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1206 08:43:07.745860   42853 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1206 08:43:07.821936 42853 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1206 08:43:07.822033 42853 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1206 08:43:07.822118 42853 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1206 08:43:07.828557 42853 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1206 08:43:07.834085 42853 out.go:252] - Generating certificates and keys ...
I1206 08:43:07.834169 42853 kubeadm.go:319] [certs] Using existing ca certificate authority
I1206 08:43:07.834233 42853 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1206 08:43:07.834308 42853 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1206 08:43:07.834367 42853 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1206 08:43:07.834435 42853 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1206 08:43:07.834488 42853 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1206 08:43:07.834554 42853 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1206 08:43:07.834614 42853 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1206 08:43:07.834687 42853 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1206 08:43:07.834767 42853 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1206 08:43:07.834870 42853 kubeadm.go:319] [certs] Using the existing "sa" key
I1206 08:43:07.834937 42853 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1206 08:43:08.278422 42853 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1206 08:43:08.539294 42853 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1206 08:43:08.582158 42853 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1206 08:43:08.680522 42853 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1206 08:43:08.962582 42853 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1206 08:43:08.963384 42853 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1206 08:43:08.966092 42853 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1206 08:43:08.969419 42853 out.go:252] - Booting up control plane ...
I1206 08:43:08.969548 42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1206 08:43:08.969646 42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1206 08:43:08.969728 42853 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1206 08:43:08.991741 42853 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1206 08:43:08.992711 42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1206 08:43:09.001100 42853 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1206 08:43:09.001418 42853 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1206 08:43:09.001702 42853 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1206 08:43:09.132563 42853 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1206 08:43:09.132703 42853 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1206 08:47:09.133344 42853 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001195219s
I1206 08:47:09.133363 42853 kubeadm.go:319]
I1206 08:47:09.133422 42853 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1206 08:47:09.133455 42853 kubeadm.go:319] - The kubelet is not running
I1206 08:47:09.133559 42853 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1206 08:47:09.133563 42853 kubeadm.go:319]
I1206 08:47:09.134080 42853 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1206 08:47:09.134142 42853 kubeadm.go:319] - 'systemctl status kubelet'
I1206 08:47:09.134340 42853 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1206 08:47:09.134344 42853 kubeadm.go:319]
I1206 08:47:09.140050 42853 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1206 08:47:09.140467 42853 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1206 08:47:09.140573 42853 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1206 08:47:09.140836 42853 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1206 08:47:09.140841 42853 kubeadm.go:319]
I1206 08:47:09.140910 42853 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1206 08:47:09.140963 42853 kubeadm.go:403] duration metric: took 8m6.231822508s to StartCluster
I1206 08:47:09.141009 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1206 08:47:09.141070 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1206 08:47:09.165494 42853 cri.go:89] found id: ""
I1206 08:47:09.165513 42853 logs.go:282] 0 containers: []
W1206 08:47:09.165520 42853 logs.go:284] No container was found matching "kube-apiserver"
I1206 08:47:09.165525 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1206 08:47:09.165591 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1206 08:47:09.189702 42853 cri.go:89] found id: ""
I1206 08:47:09.189715 42853 logs.go:282] 0 containers: []
W1206 08:47:09.189722 42853 logs.go:284] No container was found matching "etcd"
I1206 08:47:09.189727 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1206 08:47:09.189789 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1206 08:47:09.214580 42853 cri.go:89] found id: ""
I1206 08:47:09.214593 42853 logs.go:282] 0 containers: []
W1206 08:47:09.214601 42853 logs.go:284] No container was found matching "coredns"
I1206 08:47:09.214606 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1206 08:47:09.214665 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1206 08:47:09.241367 42853 cri.go:89] found id: ""
I1206 08:47:09.241392 42853 logs.go:282] 0 containers: []
W1206 08:47:09.241400 42853 logs.go:284] No container was found matching "kube-scheduler"
I1206 08:47:09.241406 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1206 08:47:09.241510 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1206 08:47:09.266821 42853 cri.go:89] found id: ""
I1206 08:47:09.266834 42853 logs.go:282] 0 containers: []
W1206 08:47:09.266841 42853 logs.go:284] No container was found matching "kube-proxy"
I1206 08:47:09.266846 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1206 08:47:09.266903 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1206 08:47:09.296050 42853 cri.go:89] found id: ""
I1206 08:47:09.296064 42853 logs.go:282] 0 containers: []
W1206 08:47:09.296071 42853 logs.go:284] No container was found matching "kube-controller-manager"
I1206 08:47:09.296077 42853 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1206 08:47:09.296136 42853 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1206 08:47:09.321390 42853 cri.go:89] found id: ""
I1206 08:47:09.321403 42853 logs.go:282] 0 containers: []
W1206 08:47:09.321410 42853 logs.go:284] No container was found matching "kindnet"
I1206 08:47:09.321429 42853 logs.go:123] Gathering logs for kubelet ...
I1206 08:47:09.321440 42853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1206 08:47:09.377851 42853 logs.go:123] Gathering logs for dmesg ...
I1206 08:47:09.377868 42853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1206 08:47:09.388816 42853 logs.go:123] Gathering logs for describe nodes ...
I1206 08:47:09.388830 42853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1206 08:47:09.453833 42853 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1206 08:47:09.445536 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.446397 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.448143 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.448438 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.449920 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1206 08:47:09.445536 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.446397 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.448143 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.448438 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:09.449920 4813 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1206 08:47:09.453844 42853 logs.go:123] Gathering logs for containerd ...
I1206 08:47:09.453854 42853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1206 08:47:09.495854 42853 logs.go:123] Gathering logs for container status ...
I1206 08:47:09.495873 42853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1206 08:47:09.530262 42853 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001195219s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1206 08:47:09.530305 42853 out.go:285] *
W1206 08:47:09.530365 42853 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001195219s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1206 08:47:09.530374 42853 out.go:285] *
W1206 08:47:09.532520 42853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1206 08:47:09.537665 42853 out.go:203]
W1206 08:47:09.539878 42853 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001195219s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1206 08:47:09.539930 42853 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1206 08:47:09.539951 42853 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1206 08:47:09.543310 42853 out.go:203]
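minikube's own Suggestion above points at the kubelet cgroup driver; a hypothetical retry sketch, not part of the recorded run, using only values visible in this log (profile functional-090986, docker driver, containerd runtime, Kubernetes v1.35.0-beta.0) with the suggested extra-config flag appended:
# Retry the start with the suggested kubelet cgroup-driver override;
# other flags from the failing invocation are omitted here.
out/minikube-linux-arm64 start -p functional-090986 --driver=docker \
  --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd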
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.041993520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042066947Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042175402Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042246475Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042319812Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042380497Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042438680Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042525219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042598901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.042691840Z" level=info msg="Connect containerd service"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.043054055Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.043778394Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.060430607Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.060537618Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.061094827Z" level=info msg="Start subscribing containerd event"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.061161748Z" level=info msg="Start recovering state"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104257792Z" level=info msg="Start event monitor"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104326009Z" level=info msg="Start cni network conf syncer for default"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104335543Z" level=info msg="Start streaming server"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104344922Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104353792Z" level=info msg="runtime interface starting up..."
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104360930Z" level=info msg="starting plugins..."
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.104373861Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 06 08:39:01 functional-090986 systemd[1]: Started containerd.service - containerd container runtime.
Dec 06 08:39:01 functional-090986 containerd[763]: time="2025-12-06T08:39:01.106555067Z" level=info msg="containerd successfully booted in 0.091825s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1206 08:47:10.554438 4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:10.554872 4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:10.556349 4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:10.556677 4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 08:47:10.558095 4936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 6 08:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.944603] kauditd_printk_skb: 39 callbacks suppressed
==> kernel <==
08:47:10 up 29 min, 0 user, load average: 0.30, 0.55, 0.74
Linux functional-090986 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 06 08:47:07 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 08:47:07 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 06 08:47:07 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:07 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:08 functional-090986 kubelet[4739]: E1206 08:47:08.025431 4739 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 06 08:47:08 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:08 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:08 functional-090986 kubelet[4744]: E1206 08:47:08.767550 4744 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:47:08 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 08:47:09 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 06 08:47:09 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:09 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:09 functional-090986 kubelet[4819]: E1206 08:47:09.533262 4819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 08:47:09 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:47:09 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 08:47:10 functional-090986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 06 08:47:10 functional-090986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:10 functional-090986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 08:47:10 functional-090986 kubelet[4858]: E1206 08:47:10.279803 4858 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 08:47:10 functional-090986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:47:10 functional-090986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-090986 -n functional-090986: exit status 6 (356.980828ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1206 08:47:11.030285 48605 status.go:458] kubeconfig endpoint: get endpoint: "functional-090986" does not appear in /home/jenkins/minikube-integration/22049-2448/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-090986" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.99s)