=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1206 10:24:34.267057 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:25:01.971698 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.580178 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.586663 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.598238 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.619774 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.661226 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.742725 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:23.904390 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:24.226095 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:24.868498 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:26.150301 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:28.711770 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:33.833573 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:26:44.075836 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:27:04.557224 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:27:45.518650 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:29:07.440080 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-095547/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:29:34.267532 296532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/addons-958450/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
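The cert_rotation errors above reference client certificates under profiles (addons-958450, functional-095547) that no longer exist; the audit table later in this log shows functional-095547 was deleted just before this start, and addons-958450 appears to be in the same state, so these look like stale kubeconfig references rather than part of this test's failure. A hedged cleanup sketch, assuming minikube's convention of naming the kubeconfig context, cluster, and user after the profile:

    # Remove stale kubeconfig entries left behind by a deleted profile.
    kubectl config delete-context functional-095547
    kubectl config delete-cluster functional-095547
    kubectl config unset users.functional-095547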
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m18.534737394s)
-- stdout --
* [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22047
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-147194" primary control-plane node in "functional-147194" cluster
* Pulling base image v0.0.48-1764843390-22032 ...
* Found network options:
- HTTP_PROXY=localhost:40975
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:40975 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001001783s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229763s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229763s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
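The stderr block above carries two actionable hints: the NO_PROXY warning near the top and the kubelet cgroup-driver suggestion at the bottom. A hedged follow-up sketch that applies both, plus the journalctl check the kubeadm output recommends; the binary path and profile name are this run's, while the NO_PROXY value is an assumption chosen to cover the minikube IP 192.168.49.2:

    # Inspect kubelet logs inside the still-running node container.
    out/minikube-linux-arm64 -p functional-147194 ssh -- sudo journalctl -xeu kubelet | tail -n 50

    # Retry with the minikube subnet excluded from the proxy and the
    # suggested kubelet cgroup driver override.
    export NO_PROXY=localhost,127.0.0.1,192.168.49.0/24
    out/minikube-linux-arm64 start -p functional-147194 --memory=4096 \
        --apiserver-port=8441 --wait=all --driver=docker \
        --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 \
        --extra-config=kubelet.cgroup-driver=systemd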
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
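Separately, the repeated SystemVerification warning is a plausible root cause: it states that kubelet v1.35+ rejects cgroups v1 hosts unless the 'FailCgroupV1' kubelet configuration option is set to 'false', and the warning firing at all indicates this Ubuntu 20.04 host is still on cgroups v1. A hedged sketch of the setting involved (the camelCased failCgroupV1 spelling is an assumption based on KubeletConfiguration field conventions, and kubeadm rewrites /var/lib/kubelet/config.yaml on each init, so a durable fix would have to go through kubeadm's kubelet patch mechanism instead):

    # On the node: append the opt-out to the config file the kubeadm
    # output above says the kubelet loads.
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF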
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-147194
helpers_test.go:243: (dbg) docker inspect functional-147194:
-- stdout --
[
{
"Id": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
"Created": "2025-12-06T10:22:24.491423296Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 335508,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-06T10:22:24.552981626Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
"ResolvConfPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hostname",
"HostsPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/hosts",
"LogPath": "/var/lib/docker/containers/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b/4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b-json.log",
"Name": "/functional-147194",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-147194:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-147194",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "4de95606394dd7d61399026147354806122946ccf675206ed3c2406c1c038a1b",
"LowerDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0-init/diff:/var/lib/docker/overlay2/73a259cc89fa924872b7332aa52e997ae0d529069bc65eb2678a6164801eb129/diff",
"MergedDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/merged",
"UpperDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/diff",
"WorkDir": "/var/lib/docker/overlay2/f2347d25e6807cf39dd0d9c5ed4ed9e526ee50bd128e8668c9b947696ca3f8a0/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-147194",
"Source": "/var/lib/docker/volumes/functional-147194/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-147194",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-147194",
"name.minikube.sigs.k8s.io": "functional-147194",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "16b25e2220753c81b6a41ed3df741445b699f4c9fd41873d886a9d2ae9c5870d",
"SandboxKey": "/var/run/docker/netns/16b25e222075",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33128"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33129"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33132"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33130"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33131"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-147194": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "2a:4f:2f:7e:2e:63",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "6cc427a57ca3898cf6f76d919b64cfdf56e2f9354a43ff87ab796f27f387922a",
"EndpointID": "8ec508a314b87c391ad1ef7b4161375f99fe689791e0717ecfbd9b6fa99920e3",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-147194",
"4de95606394d"
]
}
}
}
}
]
-- /stdout --
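In the inspect output above, HostConfig.PortBindings requests ephemeral host ports (every HostPort is empty), and NetworkSettings.Ports shows what Docker actually assigned (33128-33132, bound to 127.0.0.1). A quick way to read a single mapping with the standard docker CLI:

    # Show the host port Docker assigned to the apiserver port.
    docker port functional-147194 8441/tcp
    # -> 127.0.0.1:33131 for this run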
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-147194 -n functional-147194: exit status 6 (324.539788ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1206 10:30:38.250007 340595 status.go:458] kubeconfig endpoint: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
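The exit-status-6 result above is the stale-kubeconfig symptom the status stdout itself flags, and the stderr confirms the "functional-147194" entry is missing from the kubeconfig. A minimal sketch of the fix the output suggests, with this run's binary and profile:

    # Repoint (or recreate) the kubeconfig entry for this profile.
    out/minikube-linux-arm64 -p functional-147194 update-context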
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-147194 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-095547 ssh -n functional-095547 sudo cat /home/docker/cp-test.txt │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image load --daemon kicbase/echo-server:functional-095547 --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ cp │ functional-095547 cp functional-095547:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1142755289/001/cp-test.txt │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ ssh │ functional-095547 ssh -n functional-095547 sudo cat /home/docker/cp-test.txt │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ cp │ functional-095547 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ ssh │ functional-095547 ssh -n functional-095547 sudo cat /tmp/does/not/exist/cp-test.txt │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image save kicbase/echo-server:functional-095547 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image rm kicbase/echo-server:functional-095547 --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image save --daemon kicbase/echo-server:functional-095547 --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ update-context │ functional-095547 update-context --alsologtostderr -v=2 │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ update-context │ functional-095547 update-context --alsologtostderr -v=2 │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ update-context │ functional-095547 update-context --alsologtostderr -v=2 │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls --format short --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls --format yaml --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ ssh │ functional-095547 ssh pgrep buildkitd │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ │
│ image │ functional-095547 image ls --format json --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image build -t localhost/my-image:functional-095547 testdata/build --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls --format table --alsologtostderr │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ image │ functional-095547 image ls │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ delete │ -p functional-095547 │ functional-095547 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ 06 Dec 25 10:22 UTC │
│ start │ -p functional-147194 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-147194 │ jenkins │ v1.37.0 │ 06 Dec 25 10:22 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/06 10:22:19
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1206 10:22:19.422476 335120 out.go:360] Setting OutFile to fd 1 ...
I1206 10:22:19.422570 335120 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:19.422574 335120 out.go:374] Setting ErrFile to fd 2...
I1206 10:22:19.422578 335120 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:22:19.422912 335120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-294672/.minikube/bin
I1206 10:22:19.423358 335120 out.go:368] Setting JSON to false
I1206 10:22:19.424204 335120 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11091,"bootTime":1765005449,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1206 10:22:19.424277 335120 start.go:143] virtualization:
I1206 10:22:19.428077 335120 out.go:179] * [functional-147194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1206 10:22:19.431984 335120 out.go:179] - MINIKUBE_LOCATION=22047
I1206 10:22:19.432246 335120 notify.go:221] Checking for updates...
I1206 10:22:19.438103 335120 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1206 10:22:19.441030 335120 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22047-294672/kubeconfig
I1206 10:22:19.443929 335120 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-294672/.minikube
I1206 10:22:19.446930 335120 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1206 10:22:19.449719 335120 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1206 10:22:19.452812 335120 driver.go:422] Setting default libvirt URI to qemu:///system
I1206 10:22:19.482524 335120 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1206 10:22:19.482671 335120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:22:19.539185 335120 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:22:19.52939381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:22:19.539286 335120 docker.go:319] overlay module found
I1206 10:22:19.542447 335120 out.go:179] * Using the docker driver based on user configuration
I1206 10:22:19.545341 335120 start.go:309] selected driver: docker
I1206 10:22:19.545351 335120 start.go:927] validating driver "docker" against <nil>
I1206 10:22:19.545363 335120 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1206 10:22:19.546087 335120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 10:22:19.602733 335120 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:22:19.593958312 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 10:22:19.602872 335120 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1206 10:22:19.603106 335120 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1206 10:22:19.605946 335120 out.go:179] * Using Docker driver with root privileges
I1206 10:22:19.608707 335120 cni.go:84] Creating CNI manager for ""
I1206 10:22:19.608770 335120 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1206 10:22:19.608776 335120 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1206 10:22:19.608846 335120 start.go:353] cluster config:
{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 10:22:19.613865 335120 out.go:179] * Starting "functional-147194" primary control-plane node in "functional-147194" cluster
I1206 10:22:19.616597 335120 cache.go:134] Beginning downloading kic base image for docker with containerd
I1206 10:22:19.619460 335120 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
I1206 10:22:19.622165 335120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 10:22:19.622217 335120 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1206 10:22:19.622224 335120 cache.go:65] Caching tarball of preloaded images
I1206 10:22:19.622226 335120 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
I1206 10:22:19.622310 335120 preload.go:238] Found /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1206 10:22:19.622319 335120 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1206 10:22:19.622658 335120 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
I1206 10:22:19.622680 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json: {Name:mk73987bed89b772f8aa22479ceb68dfc6f91d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:19.643112 335120 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
I1206 10:22:19.643123 335120 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
I1206 10:22:19.643142 335120 cache.go:243] Successfully downloaded all kic artifacts
I1206 10:22:19.643173 335120 start.go:360] acquireMachinesLock for functional-147194: {Name:mk265fe2add24433bfa1f7ef4d3154f23fa1281a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1206 10:22:19.643283 335120 start.go:364] duration metric: took 94.877µs to acquireMachinesLock for "functional-147194"
I1206 10:22:19.643307 335120 start.go:93] Provisioning new machine with config: &{Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1206 10:22:19.643372 335120 start.go:125] createHost starting for "" (driver="docker")
I1206 10:22:19.646629 335120 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1206 10:22:19.646914 335120 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:40975 to docker env.
I1206 10:22:19.646940 335120 start.go:159] libmachine.API.Create for "functional-147194" (driver="docker")
I1206 10:22:19.646962 335120 client.go:173] LocalClient.Create starting
I1206 10:22:19.647034 335120 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem
I1206 10:22:19.647071 335120 main.go:143] libmachine: Decoding PEM data...
I1206 10:22:19.647087 335120 main.go:143] libmachine: Parsing certificate...
I1206 10:22:19.647146 335120 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem
I1206 10:22:19.647161 335120 main.go:143] libmachine: Decoding PEM data...
I1206 10:22:19.647171 335120 main.go:143] libmachine: Parsing certificate...
I1206 10:22:19.647519 335120 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 10:22:19.663454 335120 cli_runner.go:211] docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 10:22:19.663539 335120 network_create.go:284] running [docker network inspect functional-147194] to gather additional debugging logs...
I1206 10:22:19.663554 335120 cli_runner.go:164] Run: docker network inspect functional-147194
W1206 10:22:19.677838 335120 cli_runner.go:211] docker network inspect functional-147194 returned with exit code 1
I1206 10:22:19.677856 335120 network_create.go:287] error running [docker network inspect functional-147194]: docker network inspect functional-147194: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-147194 not found
I1206 10:22:19.677867 335120 network_create.go:289] output of [docker network inspect functional-147194]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-147194 not found
** /stderr **
I1206 10:22:19.677959 335120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 10:22:19.694463 335120 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191d720}
I1206 10:22:19.694493 335120 network_create.go:124] attempt to create docker network functional-147194 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1206 10:22:19.694547 335120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-147194 functional-147194
I1206 10:22:19.756806 335120 network_create.go:108] docker network functional-147194 192.168.49.0/24 created
I1206 10:22:19.756825 335120 kic.go:121] calculated static IP "192.168.49.2" for the "functional-147194" container
I1206 10:22:19.756897 335120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1206 10:22:19.772066 335120 cli_runner.go:164] Run: docker volume create functional-147194 --label name.minikube.sigs.k8s.io=functional-147194 --label created_by.minikube.sigs.k8s.io=true
I1206 10:22:19.794908 335120 oci.go:103] Successfully created a docker volume functional-147194
I1206 10:22:19.795013 335120 cli_runner.go:164] Run: docker run --rm --name functional-147194-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-147194 --entrypoint /usr/bin/test -v functional-147194:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
I1206 10:22:20.341564 335120 oci.go:107] Successfully prepared a docker volume functional-147194
I1206 10:22:20.341624 335120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 10:22:20.341634 335120 kic.go:194] Starting extracting preloaded images to volume ...
I1206 10:22:20.341699 335120 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-147194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
I1206 10:22:24.421377 335120 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-294672/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-147194:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.079642986s)
I1206 10:22:24.421399 335120 kic.go:203] duration metric: took 4.079763447s to extract preloaded images to volume ...
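The extraction step above is a generic pattern: mount the preload tarball read-only next to a named volume and untar inside a throwaway container. A hedged sketch with the tarball path and volume name as placeholders:

    # PRELOAD_TAR and VOLUME are placeholders, not names taken from minikube itself
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TAR":/preloaded.tar:ro \
      -v "$VOLUME":/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032 \
      -I lz4 -xf /preloaded.tar -C /extractDir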
W1206 10:22:24.421553 335120 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1206 10:22:24.421659 335120 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1206 10:22:24.477109 335120 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-147194 --name functional-147194 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-147194 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-147194 --network functional-147194 --ip 192.168.49.2 --volume functional-147194:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
I1206 10:22:24.766117 335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Running}}
I1206 10:22:24.789327 335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:22:24.811486 335120 cli_runner.go:164] Run: docker exec functional-147194 stat /var/lib/dpkg/alternatives/iptables
I1206 10:22:24.863818 335120 oci.go:144] the created container "functional-147194" has a running status.
I1206 10:22:24.863849 335120 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa...
I1206 10:22:25.441640 335120 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1206 10:22:25.467839 335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:22:25.495691 335120 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1206 10:22:25.495713 335120 kic_runner.go:114] Args: [docker exec --privileged functional-147194 chown docker:docker /home/docker/.ssh/authorized_keys]
I1206 10:22:25.562380 335120 cli_runner.go:164] Run: docker container inspect functional-147194 --format={{.State.Status}}
I1206 10:22:25.585705 335120 machine.go:94] provisionDockerMachine start ...
I1206 10:22:25.585783 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:25.614038 335120 main.go:143] libmachine: Using SSH client type: native
I1206 10:22:25.614392 335120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33128 <nil> <nil>}
I1206 10:22:25.614399 335120 main.go:143] libmachine: About to run SSH command:
hostname
I1206 10:22:25.784515 335120 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
I1206 10:22:25.784530 335120 ubuntu.go:182] provisioning hostname "functional-147194"
I1206 10:22:25.784594 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:25.807186 335120 main.go:143] libmachine: Using SSH client type: native
I1206 10:22:25.807488 335120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33128 <nil> <nil>}
I1206 10:22:25.807497 335120 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-147194 && echo "functional-147194" | sudo tee /etc/hostname
I1206 10:22:25.978322 335120 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-147194
I1206 10:22:25.978414 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:25.995370 335120 main.go:143] libmachine: Using SSH client type: native
I1206 10:22:25.995684 335120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33128 <nil> <nil>}
I1206 10:22:25.995699 335120 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-147194' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-147194/g' /etc/hosts;
else
echo '127.0.1.1 functional-147194' | sudo tee -a /etc/hosts;
fi
fi
I1206 10:22:26.149524 335120 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1206 10:22:26.149541 335120 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-294672/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-294672/.minikube}
I1206 10:22:26.149559 335120 ubuntu.go:190] setting up certificates
I1206 10:22:26.149566 335120 provision.go:84] configureAuth start
I1206 10:22:26.149644 335120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
I1206 10:22:26.167147 335120 provision.go:143] copyHostCerts
I1206 10:22:26.167209 335120 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem, removing ...
I1206 10:22:26.167216 335120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem
I1206 10:22:26.167296 335120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/ca.pem (1082 bytes)
I1206 10:22:26.167389 335120 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem, removing ...
I1206 10:22:26.167393 335120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem
I1206 10:22:26.167419 335120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/cert.pem (1123 bytes)
I1206 10:22:26.167467 335120 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem, removing ...
I1206 10:22:26.167470 335120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem
I1206 10:22:26.167492 335120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-294672/.minikube/key.pem (1675 bytes)
I1206 10:22:26.167535 335120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem org=jenkins.functional-147194 san=[127.0.0.1 192.168.49.2 functional-147194 localhost minikube]
I1206 10:22:26.373540 335120 provision.go:177] copyRemoteCerts
I1206 10:22:26.373600 335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1206 10:22:26.373639 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:26.390451 335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:22:26.496287 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1206 10:22:26.512927 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1206 10:22:26.530228 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
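The server cert copied above was generated with the SAN list logged at provision.go:117; one way to confirm those SANs landed in the cert (a sketch, using this run's host path):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22047-294672/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'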
I1206 10:22:26.547720 335120 provision.go:87] duration metric: took 398.131111ms to configureAuth
I1206 10:22:26.547738 335120 ubuntu.go:206] setting minikube options for container-runtime
I1206 10:22:26.547939 335120 config.go:182] Loaded profile config "functional-147194": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1206 10:22:26.547945 335120 machine.go:97] duration metric: took 962.229408ms to provisionDockerMachine
I1206 10:22:26.547950 335120 client.go:176] duration metric: took 6.900984023s to LocalClient.Create
I1206 10:22:26.547973 335120 start.go:167] duration metric: took 6.901033788s to libmachine.API.Create "functional-147194"
I1206 10:22:26.547980 335120 start.go:293] postStartSetup for "functional-147194" (driver="docker")
I1206 10:22:26.547991 335120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1206 10:22:26.548048 335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1206 10:22:26.548093 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:26.565705 335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:22:26.673064 335120 ssh_runner.go:195] Run: cat /etc/os-release
I1206 10:22:26.676315 335120 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1206 10:22:26.676333 335120 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1206 10:22:26.676343 335120 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/addons for local assets ...
I1206 10:22:26.676398 335120 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-294672/.minikube/files for local assets ...
I1206 10:22:26.676485 335120 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem -> 2965322.pem in /etc/ssl/certs
I1206 10:22:26.676564 335120 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts -> hosts in /etc/test/nested/copy/296532
I1206 10:22:26.676607 335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/296532
I1206 10:22:26.684027 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /etc/ssl/certs/2965322.pem (1708 bytes)
I1206 10:22:26.700941 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/test/nested/copy/296532/hosts --> /etc/test/nested/copy/296532/hosts (40 bytes)
I1206 10:22:26.718526 335120 start.go:296] duration metric: took 170.53193ms for postStartSetup
I1206 10:22:26.718884 335120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
I1206 10:22:26.735466 335120 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/config.json ...
I1206 10:22:26.735728 335120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1206 10:22:26.735774 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:26.756166 335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:22:26.857709 335120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1206 10:22:26.861896 335120 start.go:128] duration metric: took 7.218511584s to createHost
I1206 10:22:26.861910 335120 start.go:83] releasing machines lock for "functional-147194", held for 7.218620919s
I1206 10:22:26.861987 335120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-147194
I1206 10:22:26.882835 335120 out.go:179] * Found network options:
I1206 10:22:26.885692 335120 out.go:179] - HTTP_PROXY=localhost:40975
W1206 10:22:26.888476 335120 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1206 10:22:26.891311 335120 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1206 10:22:26.894172 335120 ssh_runner.go:195] Run: cat /version.json
I1206 10:22:26.894218 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:26.894241 335120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1206 10:22:26.894298 335120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-147194
I1206 10:22:26.916873 335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:22:26.922423 335120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22047-294672/.minikube/machines/functional-147194/id_rsa Username:docker}
I1206 10:22:27.017278 335120 ssh_runner.go:195] Run: systemctl --version
I1206 10:22:27.113933 335120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1206 10:22:27.118177 335120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1206 10:22:27.118242 335120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1206 10:22:27.146635 335120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1206 10:22:27.146649 335120 start.go:496] detecting cgroup driver to use...
I1206 10:22:27.146680 335120 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1206 10:22:27.146736 335120 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1206 10:22:27.162258 335120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1206 10:22:27.175754 335120 docker.go:218] disabling cri-docker service (if available) ...
I1206 10:22:27.175822 335120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1206 10:22:27.193664 335120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1206 10:22:27.212585 335120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1206 10:22:27.326656 335120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1206 10:22:27.451443 335120 docker.go:234] disabling docker service ...
I1206 10:22:27.451500 335120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1206 10:22:27.473138 335120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1206 10:22:27.487152 335120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1206 10:22:27.607229 335120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1206 10:22:27.727320 335120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1206 10:22:27.741038 335120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1206 10:22:27.754780 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1206 10:22:27.763381 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1206 10:22:27.772116 335120 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1206 10:22:27.772176 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1206 10:22:27.780974 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1206 10:22:27.789760 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1206 10:22:27.798220 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1206 10:22:27.806621 335120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1206 10:22:27.814434 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1206 10:22:27.822936 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1206 10:22:27.831242 335120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1206 10:22:27.840508 335120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1206 10:22:27.847869 335120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1206 10:22:27.855139 335120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 10:22:27.978043 335120 ssh_runner.go:195] Run: sudo systemctl restart containerd
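Once containerd has been reconfigured and restarted as above, the runtime can be probed directly over the endpoint written to /etc/crictl.yaml earlier; a minimal sketch, run inside the node:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    # confirm the cgroup-driver edit took effect
    grep SystemdCgroup /etc/containerd/config.toml    # expect: SystemdCgroup = false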
I1206 10:22:28.128038 335120 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1206 10:22:28.128115 335120 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1206 10:22:28.132144 335120 start.go:564] Will wait 60s for crictl version
I1206 10:22:28.132202 335120 ssh_runner.go:195] Run: which crictl
I1206 10:22:28.135844 335120 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1206 10:22:28.160067 335120 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1206 10:22:28.160142 335120 ssh_runner.go:195] Run: containerd --version
I1206 10:22:28.181858 335120 ssh_runner.go:195] Run: containerd --version
I1206 10:22:28.207778 335120 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1206 10:22:28.210721 335120 cli_runner.go:164] Run: docker network inspect functional-147194 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 10:22:28.231135 335120 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1206 10:22:28.234894 335120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 10:22:28.244396 335120 kubeadm.go:884] updating cluster {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1206 10:22:28.244500 335120 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1206 10:22:28.244560 335120 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 10:22:28.268904 335120 containerd.go:627] all images are preloaded for containerd runtime.
I1206 10:22:28.268916 335120 containerd.go:534] Images already preloaded, skipping extraction
I1206 10:22:28.268977 335120 ssh_runner.go:195] Run: sudo crictl images --output json
I1206 10:22:28.297675 335120 containerd.go:627] all images are preloaded for containerd runtime.
I1206 10:22:28.297701 335120 cache_images.go:86] Images are preloaded, skipping loading
I1206 10:22:28.297708 335120 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1206 10:22:28.297807 335120 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-147194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
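In the kubelet unit rendered above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the base unit's command so the following ExecStart= fully replaces it. To see the merged result on the node (a sketch for this profile):

    docker exec functional-147194 systemctl cat kubelet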
I1206 10:22:28.297872 335120 ssh_runner.go:195] Run: sudo crictl info
I1206 10:22:28.332614 335120 cni.go:84] Creating CNI manager for ""
I1206 10:22:28.332627 335120 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1206 10:22:28.332643 335120 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1206 10:22:28.332664 335120 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-147194 NodeName:functional-147194 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1206 10:22:28.332778 335120 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-147194"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
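A rendered config like the one above can be sanity-checked before it is handed to init; a hedged sketch using kubeadm's built-in validator and this run's paths, once the file has been copied to /var/tmp/minikube/kubeadm.yaml (as happens later in this run):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml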
I1206 10:22:28.332844 335120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1206 10:22:28.340530 335120 binaries.go:51] Found k8s binaries, skipping transfer
I1206 10:22:28.340592 335120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1206 10:22:28.348118 335120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1206 10:22:28.360612 335120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1206 10:22:28.373713 335120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1206 10:22:28.386721 335120 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1206 10:22:28.390626 335120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1206 10:22:28.400112 335120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1206 10:22:28.515111 335120 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1206 10:22:28.530990 335120 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194 for IP: 192.168.49.2
I1206 10:22:28.531001 335120 certs.go:195] generating shared ca certs ...
I1206 10:22:28.531015 335120 certs.go:227] acquiring lock for ca certs: {Name:mk8d13f1011e7785caff4e4d38ff250fc5bed9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:28.531153 335120 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key
I1206 10:22:28.531205 335120 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key
I1206 10:22:28.531211 335120 certs.go:257] generating profile certs ...
I1206 10:22:28.531263 335120 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key
I1206 10:22:28.531273 335120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt with IP's: []
I1206 10:22:29.371778 335120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt ...
I1206 10:22:29.371794 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.crt: {Name:mk9578f9194ea7166348e6f3b5ebb8bfda626d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:29.371969 335120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key ...
I1206 10:22:29.371975 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/client.key: {Name:mk3400a43c3e5df71cafe2cf04621f47451db229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:29.372055 335120 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0
I1206 10:22:29.372066 335120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1206 10:22:29.574021 335120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0 ...
I1206 10:22:29.574035 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0: {Name:mkf36890f699db70c95860fab7a3db99814af28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:29.574235 335120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0 ...
I1206 10:22:29.574243 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0: {Name:mkcb30f188f2ef895fe80015a77c8f4c87b51806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:29.574335 335120 certs.go:382] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt.85bf0fb0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt
I1206 10:22:29.574407 335120 certs.go:386] copying /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key.85bf0fb0 -> /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key
I1206 10:22:29.574458 335120 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key
I1206 10:22:29.574470 335120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt with IP's: []
I1206 10:22:29.974032 335120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt ...
I1206 10:22:29.974047 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt: {Name:mk19b468c48d979b6a8ac75d6f4671b927bb4c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:29.974228 335120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key ...
I1206 10:22:29.974236 335120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key: {Name:mk4f0a47e35bd4c619f64ca5ddeacd0606823992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 10:22:29.974473 335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem (1338 bytes)
W1206 10:22:29.974513 335120 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532_empty.pem, impossibly tiny 0 bytes
I1206 10:22:29.974520 335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca-key.pem (1675 bytes)
I1206 10:22:29.974546 335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/ca.pem (1082 bytes)
I1206 10:22:29.974572 335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/cert.pem (1123 bytes)
I1206 10:22:29.974596 335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/certs/key.pem (1675 bytes)
I1206 10:22:29.974639 335120 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem (1708 bytes)
I1206 10:22:29.975251 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1206 10:22:29.994337 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1206 10:22:30.025318 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1206 10:22:30.051339 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1206 10:22:30.075257 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1206 10:22:30.095162 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1206 10:22:30.114778 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1206 10:22:30.138884 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/profiles/functional-147194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1206 10:22:30.158133 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/files/etc/ssl/certs/2965322.pem --> /usr/share/ca-certificates/2965322.pem (1708 bytes)
I1206 10:22:30.177101 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1206 10:22:30.196544 335120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-294672/.minikube/certs/296532.pem --> /usr/share/ca-certificates/296532.pem (1338 bytes)
I1206 10:22:30.215344 335120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1206 10:22:30.228661 335120 ssh_runner.go:195] Run: openssl version
I1206 10:22:30.235131 335120 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2965322.pem
I1206 10:22:30.242613 335120 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2965322.pem /etc/ssl/certs/2965322.pem
I1206 10:22:30.250172 335120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2965322.pem
I1206 10:22:30.253893 335120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 6 10:22 /usr/share/ca-certificates/2965322.pem
I1206 10:22:30.253948 335120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2965322.pem
I1206 10:22:30.295583 335120 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1206 10:22:30.303057 335120 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2965322.pem /etc/ssl/certs/3ec20f2e.0
I1206 10:22:30.310245 335120 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1206 10:22:30.317493 335120 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1206 10:22:30.324877 335120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1206 10:22:30.328487 335120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 6 10:13 /usr/share/ca-certificates/minikubeCA.pem
I1206 10:22:30.328548 335120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1206 10:22:30.369574 335120 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1206 10:22:30.376811 335120 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1206 10:22:30.383899 335120 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/296532.pem
I1206 10:22:30.391252 335120 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/296532.pem /etc/ssl/certs/296532.pem
I1206 10:22:30.398692 335120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/296532.pem
I1206 10:22:30.402552 335120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 6 10:22 /usr/share/ca-certificates/296532.pem
I1206 10:22:30.402605 335120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/296532.pem
I1206 10:22:30.443723 335120 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1206 10:22:30.451241 335120 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/296532.pem /etc/ssl/certs/51391683.0
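The ln -fs steps above implement the OpenSSL c_rehash convention: each CA is exposed under <subject-hash>.0 so the TLS stack can look it up by hash. A minimal sketch of the same step for one cert:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # for this CA, HASH evaluates to b5213941, matching the link created above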
I1206 10:22:30.458667 335120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1206 10:22:30.462159 335120 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1206 10:22:30.462200 335120 kubeadm.go:401] StartCluster: {Name:functional-147194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-147194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1206 10:22:30.462265 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1206 10:22:30.462363 335120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1206 10:22:30.491289 335120 cri.go:89] found id: ""
I1206 10:22:30.491347 335120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1206 10:22:30.499136 335120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1206 10:22:30.506935 335120 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1206 10:22:30.506988 335120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1206 10:22:30.514759 335120 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1206 10:22:30.514768 335120 kubeadm.go:158] found existing configuration files:
I1206 10:22:30.514821 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1206 10:22:30.522509 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1206 10:22:30.522566 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1206 10:22:30.529890 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1206 10:22:30.537848 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1206 10:22:30.537902 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1206 10:22:30.545175 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1206 10:22:30.552802 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1206 10:22:30.552868 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1206 10:22:30.560105 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1206 10:22:30.573268 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1206 10:22:30.573322 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1206 10:22:30.580704 335120 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1206 10:22:30.616798 335120 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1206 10:22:30.616851 335120 kubeadm.go:319] [preflight] Running pre-flight checks
I1206 10:22:30.685495 335120 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1206 10:22:30.685565 335120 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1206 10:22:30.685599 335120 kubeadm.go:319] OS: Linux
I1206 10:22:30.685641 335120 kubeadm.go:319] CGROUPS_CPU: enabled
I1206 10:22:30.685688 335120 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1206 10:22:30.685738 335120 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1206 10:22:30.685812 335120 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1206 10:22:30.685874 335120 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1206 10:22:30.685925 335120 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1206 10:22:30.685970 335120 kubeadm.go:319] CGROUPS_PIDS: enabled
I1206 10:22:30.686017 335120 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1206 10:22:30.686061 335120 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1206 10:22:30.764750 335120 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1206 10:22:30.764860 335120 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1206 10:22:30.764968 335120 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1206 10:22:30.777540 335120 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1206 10:22:30.783705 335120 out.go:252] - Generating certificates and keys ...
I1206 10:22:30.783800 335120 kubeadm.go:319] [certs] Using existing ca certificate authority
I1206 10:22:30.783865 335120 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1206 10:22:31.409111 335120 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1206 10:22:31.498727 335120 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1206 10:22:31.859520 335120 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1206 10:22:32.068937 335120 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1206 10:22:32.284925 335120 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1206 10:22:32.285074 335120 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1206 10:22:32.376623 335120 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1206 10:22:32.376969 335120 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1206 10:22:32.884254 335120 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1206 10:22:33.183480 335120 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1206 10:22:33.383365 335120 kubeadm.go:319] [certs] Generating "sa" key and public key
I1206 10:22:33.383655 335120 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1206 10:22:33.636718 335120 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1206 10:22:33.692035 335120 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1206 10:22:33.957214 335120 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1206 10:22:34.174887 335120 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1206 10:22:34.500589 335120 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1206 10:22:34.501301 335120 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1206 10:22:34.504198 335120 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1206 10:22:34.507589 335120 out.go:252] - Booting up control plane ...
I1206 10:22:34.507692 335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1206 10:22:34.507774 335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1206 10:22:34.507839 335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1206 10:22:34.536922 335120 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1206 10:22:34.537263 335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1206 10:22:34.545482 335120 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1206 10:22:34.546187 335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1206 10:22:34.547254 335120 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1206 10:22:34.679460 335120 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1206 10:22:34.679572 335120 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1206 10:26:34.680431 335120 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001001783s
I1206 10:26:34.680457 335120 kubeadm.go:319]
I1206 10:26:34.680511 335120 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1206 10:26:34.680541 335120 kubeadm.go:319] - The kubelet is not running
I1206 10:26:34.680640 335120 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1206 10:26:34.680647 335120 kubeadm.go:319]
I1206 10:26:34.680745 335120 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1206 10:26:34.680774 335120 kubeadm.go:319] - 'systemctl status kubelet'
I1206 10:26:34.680802 335120 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1206 10:26:34.680805 335120 kubeadm.go:319]
I1206 10:26:34.685577 335120 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1206 10:26:34.686026 335120 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1206 10:26:34.686168 335120 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1206 10:26:34.686406 335120 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1206 10:26:34.686410 335120 kubeadm.go:319]
I1206 10:26:34.686478 335120 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1206 10:26:34.686614 335120 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-147194 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001001783s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
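The cgroups v1 warning above names the knob it wants: the kubelet's FailCgroupV1 gate. A hedged sketch of flipping it by hand inside the node (field name taken from the warning text; appending assumes the key is not already present in the file, and whether the v1.35.0-beta.0 kubelet then tolerates this host is untested):

  # add the gate to the kubelet's config and restart the unit
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
  sudo systemctl restart kubelet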
I1206 10:26:34.686706 335120 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1206 10:26:35.104599 335120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1206 10:26:35.119030 335120 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1206 10:26:35.119084 335120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1206 10:26:35.127345 335120 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1206 10:26:35.127353 335120 kubeadm.go:158] found existing configuration files:
I1206 10:26:35.127407 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1206 10:26:35.135226 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1206 10:26:35.135285 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1206 10:26:35.143334 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1206 10:26:35.151362 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1206 10:26:35.151415 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1206 10:26:35.158942 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1206 10:26:35.166795 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1206 10:26:35.166853 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1206 10:26:35.174931 335120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1206 10:26:35.182789 335120 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1206 10:26:35.182855 335120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
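The stale-config cleanup above is mechanical: grep each kubeconfig for the expected control-plane endpoint and remove the file when the match fails. The same logic, compressed into a sketch (endpoint and file names copied from the log lines; only useful when reproducing the cleanup outside minikube):

  ENDPOINT="https://control-plane.minikube.internal:8441"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    # grep exits non-zero when the endpoint (or the file) is missing, triggering removal
    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done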
I1206 10:26:35.190054 335120 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1206 10:26:35.234261 335120 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1206 10:26:35.234589 335120 kubeadm.go:319] [preflight] Running pre-flight checks
I1206 10:26:35.302325 335120 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1206 10:26:35.302386 335120 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1206 10:26:35.302420 335120 kubeadm.go:319] OS: Linux
I1206 10:26:35.302461 335120 kubeadm.go:319] CGROUPS_CPU: enabled
I1206 10:26:35.302505 335120 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1206 10:26:35.302555 335120 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1206 10:26:35.302599 335120 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1206 10:26:35.302643 335120 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1206 10:26:35.302687 335120 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1206 10:26:35.302729 335120 kubeadm.go:319] CGROUPS_PIDS: enabled
I1206 10:26:35.302773 335120 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1206 10:26:35.302815 335120 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1206 10:26:35.374503 335120 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1206 10:26:35.374624 335120 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1206 10:26:35.374727 335120 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1206 10:26:35.379739 335120 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1206 10:26:35.385177 335120 out.go:252] - Generating certificates and keys ...
I1206 10:26:35.385257 335120 kubeadm.go:319] [certs] Using existing ca certificate authority
I1206 10:26:35.385321 335120 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1206 10:26:35.385396 335120 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1206 10:26:35.385455 335120 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1206 10:26:35.385523 335120 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1206 10:26:35.385575 335120 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1206 10:26:35.385637 335120 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1206 10:26:35.385697 335120 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1206 10:26:35.385770 335120 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1206 10:26:35.385848 335120 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1206 10:26:35.385885 335120 kubeadm.go:319] [certs] Using the existing "sa" key
I1206 10:26:35.385939 335120 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1206 10:26:35.685326 335120 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1206 10:26:36.061349 335120 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1206 10:26:36.340926 335120 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1206 10:26:36.935790 335120 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1206 10:26:37.329824 335120 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1206 10:26:37.330339 335120 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1206 10:26:37.334804 335120 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1206 10:26:37.338092 335120 out.go:252] - Booting up control plane ...
I1206 10:26:37.338191 335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1206 10:26:37.338271 335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1206 10:26:37.338338 335120 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1206 10:26:37.357307 335120 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1206 10:26:37.357410 335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1206 10:26:37.364979 335120 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1206 10:26:37.365468 335120 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1206 10:26:37.365895 335120 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1206 10:26:37.508346 335120 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1206 10:26:37.508459 335120 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1206 10:30:37.501028 335120 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000229763s
I1206 10:30:37.501047 335120 kubeadm.go:319]
I1206 10:30:37.501113 335120 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1206 10:30:37.501148 335120 kubeadm.go:319] - The kubelet is not running
I1206 10:30:37.501265 335120 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1206 10:30:37.501269 335120 kubeadm.go:319]
I1206 10:30:37.501405 335120 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1206 10:30:37.501470 335120 kubeadm.go:319] - 'systemctl status kubelet'
I1206 10:30:37.501503 335120 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1206 10:30:37.501506 335120 kubeadm.go:319]
I1206 10:30:37.507709 335120 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1206 10:30:37.508133 335120 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1206 10:30:37.508240 335120 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1206 10:30:37.508517 335120 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1206 10:30:37.508536 335120 kubeadm.go:319]
I1206 10:30:37.508658 335120 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
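Two init attempts have now failed identically after the full 4m0s kubelet wait, which points at a host-level condition rather than a flake. The quickest discriminator is the cgroup hierarchy type (a sketch; the 5.15.0-1084-aws kernel string above suggests an Ubuntu 20.04 host, where cgroup v1 is still the default):

  # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on the legacy v1 hierarchy
  docker exec functional-147194 stat -fc %T /sys/fs/cgroup/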
I1206 10:30:37.508675 335120 kubeadm.go:403] duration metric: took 8m7.046478202s to StartCluster
I1206 10:30:37.508727 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1206 10:30:37.508793 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1206 10:30:37.533349 335120 cri.go:89] found id: ""
I1206 10:30:37.533362 335120 logs.go:282] 0 containers: []
W1206 10:30:37.533369 335120 logs.go:284] No container was found matching "kube-apiserver"
I1206 10:30:37.533375 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1206 10:30:37.533443 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1206 10:30:37.558374 335120 cri.go:89] found id: ""
I1206 10:30:37.558388 335120 logs.go:282] 0 containers: []
W1206 10:30:37.558395 335120 logs.go:284] No container was found matching "etcd"
I1206 10:30:37.558400 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1206 10:30:37.558465 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1206 10:30:37.583298 335120 cri.go:89] found id: ""
I1206 10:30:37.583312 335120 logs.go:282] 0 containers: []
W1206 10:30:37.583320 335120 logs.go:284] No container was found matching "coredns"
I1206 10:30:37.583325 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1206 10:30:37.583386 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1206 10:30:37.611020 335120 cri.go:89] found id: ""
I1206 10:30:37.611035 335120 logs.go:282] 0 containers: []
W1206 10:30:37.611048 335120 logs.go:284] No container was found matching "kube-scheduler"
I1206 10:30:37.611053 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1206 10:30:37.611112 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1206 10:30:37.636763 335120 cri.go:89] found id: ""
I1206 10:30:37.636779 335120 logs.go:282] 0 containers: []
W1206 10:30:37.636786 335120 logs.go:284] No container was found matching "kube-proxy"
I1206 10:30:37.636792 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1206 10:30:37.636857 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1206 10:30:37.662412 335120 cri.go:89] found id: ""
I1206 10:30:37.662426 335120 logs.go:282] 0 containers: []
W1206 10:30:37.662432 335120 logs.go:284] No container was found matching "kube-controller-manager"
I1206 10:30:37.662438 335120 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1206 10:30:37.662496 335120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1206 10:30:37.687073 335120 cri.go:89] found id: ""
I1206 10:30:37.687087 335120 logs.go:282] 0 containers: []
W1206 10:30:37.687094 335120 logs.go:284] No container was found matching "kindnet"
I1206 10:30:37.687103 335120 logs.go:123] Gathering logs for describe nodes ...
I1206 10:30:37.687114 335120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1206 10:30:37.750659 335120 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1206 10:30:37.741999 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.742587 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.744100 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.744708 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.746267 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1206 10:30:37.741999 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.742587 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.744100 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.744708 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:37.746267 4765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1206 10:30:37.750669 335120 logs.go:123] Gathering logs for containerd ...
I1206 10:30:37.750680 335120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1206 10:30:37.788574 335120 logs.go:123] Gathering logs for container status ...
I1206 10:30:37.788593 335120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1206 10:30:37.816354 335120 logs.go:123] Gathering logs for kubelet ...
I1206 10:30:37.816371 335120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1206 10:30:37.871994 335120 logs.go:123] Gathering logs for dmesg ...
I1206 10:30:37.872011 335120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1206 10:30:37.888590 335120 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229763s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1206 10:30:37.888623 335120 out.go:285] *
W1206 10:30:37.888720 335120 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229763s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1206 10:30:37.888743 335120 out.go:285] *
W1206 10:30:37.890983 335120 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1206 10:30:37.896233 335120 out.go:203]
W1206 10:30:37.899158 335120 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000229763s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1206 10:30:37.899201 335120 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1206 10:30:37.899237 335120 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
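Spelled out, the suggested retry is a single extra flag on the start invocation (a sketch; the cgroup-driver hint predates this failure mode, and since the kubelet here rejects cgroup v1 outright rather than mismatching drivers, it may not help on this host):

  # hypothetical retry with the suggested extra kubelet config
  out/minikube-linux-arm64 start -p functional-147194 --extra-config=kubelet.cgroup-driver=systemd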
I1206 10:30:37.902331 335120 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061605244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061619291Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061657068Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061670713Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061680108Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061690398Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061700145Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061710517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061726238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.061755744Z" level=info msg="Connect containerd service"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.062068765Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.062659680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081695808Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081762213Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081797159Z" level=info msg="Start subscribing containerd event"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.081846185Z" level=info msg="Start recovering state"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125415217Z" level=info msg="Start event monitor"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125471809Z" level=info msg="Start cni network conf syncer for default"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125482516Z" level=info msg="Start streaming server"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125491928Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125501979Z" level=info msg="runtime interface starting up..."
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125510176Z" level=info msg="starting plugins..."
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.125523280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 06 10:22:28 functional-147194 systemd[1]: Started containerd.service - containerd container runtime.
Dec 06 10:22:28 functional-147194 containerd[769]: time="2025-12-06T10:22:28.127889354Z" level=info msg="containerd successfully booted in 0.087601s"
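The only error in the containerd excerpt is the missing CNI config, which is expected this early: the conf dir is normally populated once the control plane and the CNI addon are up. Confirming the directory is still empty is one command (path taken from the error line above; docker exec access is an assumption):

  docker exec functional-147194 ls -la /etc/cni/net.d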
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1206 10:30:38.887398 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:38.888182 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:38.889804 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:38.890482 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1206 10:30:38.892164 4894 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[ +22.881815] overlayfs: idmapped layers are currently not supported
[Dec 6 09:25] overlayfs: idmapped layers are currently not supported
[Dec 6 09:26] overlayfs: idmapped layers are currently not supported
[Dec 6 09:27] overlayfs: idmapped layers are currently not supported
[Dec 6 09:28] overlayfs: idmapped layers are currently not supported
[Dec 6 09:29] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[Dec 6 09:30] overlayfs: idmapped layers are currently not supported
[Dec 6 09:32] overlayfs: idmapped layers are currently not supported
[Dec 6 09:38] overlayfs: idmapped layers are currently not supported
[Dec 6 09:40] overlayfs: idmapped layers are currently not supported
[Dec 6 09:41] overlayfs: idmapped layers are currently not supported
[ +34.598155] overlayfs: idmapped layers are currently not supported
[Dec 6 09:45] overlayfs: idmapped layers are currently not supported
[ +16.375624] overlayfs: idmapped layers are currently not supported
[Dec 6 09:46] hrtimer: interrupt took 36957444 ns
[ +9.802046] overlayfs: idmapped layers are currently not supported
[ +47.202757] overlayfs: idmapped layers are currently not supported
[Dec 6 09:47] overlayfs: idmapped layers are currently not supported
[Dec 6 09:48] overlayfs: idmapped layers are currently not supported
[ +28.128281] overlayfs: idmapped layers are currently not supported
[Dec 6 09:49] overlayfs: idmapped layers are currently not supported
[Dec 6 10:07] overlayfs: idmapped layers are currently not supported
[Dec 6 10:09] overlayfs: idmapped layers are currently not supported
[Dec 6 10:11] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
10:30:38 up 3:13, 0 user, load average: 0.31, 0.52, 1.02
Linux functional-147194 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 06 10:30:35 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 10:30:36 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 06 10:30:36 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:36 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:36 functional-147194 kubelet[4695]: E1206 10:30:36.620257 4695 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 10:30:36 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:36 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 10:30:37 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 06 10:30:37 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:37 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:37 functional-147194 kubelet[4700]: E1206 10:30:37.371505 4700 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 10:30:37 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:37 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 06 10:30:38 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:38 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:38 functional-147194 kubelet[4790]: E1206 10:30:38.132278 4790 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 06 10:30:38 functional-147194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:38 functional-147194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 06 10:30:38 functional-147194 kubelet[4888]: E1206 10:30:38.879415 4888 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:38 functional-147194 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
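The kubelet journal above is the root cause in miniature: every restart (counter 318 through 321) dies in config validation with "kubelet is configured to not run on a host using cgroup v1", so systemd cycles the unit endlessly and the healthz port never answers. Watching the loop directly is a one-liner (unit name from the journal; the time window is arbitrary):

  # count recent restart failures of the kubelet unit
  docker exec functional-147194 journalctl -u kubelet --since "5 min ago" --no-pager | grep -c "Failed with result"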
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-147194 -n functional-147194: exit status 6 (339.556327ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1206 10:30:39.353353 340815 status.go:458] kubeconfig endpoint: get endpoint: "functional-147194" does not appear in /home/jenkins/minikube-integration/22047-294672/kubeconfig
** /stderr **
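The status probe also flags a stale kubectl context; the fix it names is one command (harmless here, since no apiserver is listening either way):

  out/minikube-linux-arm64 update-context -p functional-147194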
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-147194" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (499.99s)