=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1205 06:15:45.660813 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:01.801022 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:18:29.506724 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.020046 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.026485 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.037975 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.059353 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.100718 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.182165 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.343673 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:14.665348 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:15.307400 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:16.588773 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:19.150120 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:24.271867 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:34.513504 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:19:54.995222 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:20:35.957831 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:21:57.880048 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-226068/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1205 06:23:01.797781 4192 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/addons-683092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m25.010620683s)
-- stdout --
* [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=21997
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-101526" primary control-plane node in "functional-101526" cluster
* Pulling base image v0.0.48-1764169655-21974 ...
* Found network options:
- HTTP_PROXY=localhost:40155
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:40155 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001210532s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001123814s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001123814s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
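Both kubeadm attempts die the same way: the kubelet health endpoint at 127.0.0.1:10248 never answers within the 4m0s window, and the preflight warnings point at cgroups v1 with the cgroupfs driver (the docker info lines later in this log confirm CgroupDriver:cgroupfs). A minimal manual triage sketch against this profile, built only from commands the log itself surfaces plus standard minikube ssh; the tail length is an arbitrary choice:

    # Probe the health endpoint kubeadm was polling, from inside the node container
    out/minikube-linux-arm64 -p functional-101526 ssh -- curl -sSL http://127.0.0.1:10248/healthz
    # Inspect why the kubelet never became healthy
    out/minikube-linux-arm64 -p functional-101526 ssh -- sudo journalctl -xeu kubelet | tail -n 50
    # Retry with the cgroup driver minikube's own suggestion above recommends
    out/minikube-linux-arm64 start -p functional-101526 --extra-config=kubelet.cgroup-driver=systemd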
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-101526
helpers_test.go:243: (dbg) docker inspect functional-101526:
-- stdout --
[
{
"Id": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
"Created": "2025-12-05T06:15:09.334287249Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 42544,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-05T06:15:09.400200427Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
"ResolvConfPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hostname",
"HostsPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/hosts",
"LogPath": "/var/lib/docker/containers/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11/7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11-json.log",
"Name": "/functional-101526",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-101526:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-101526",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "7d26b0b609d526a52ebc2110208b14a589973b067ea3c8b06a0be4865bc01d11",
"LowerDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e-init/diff:/var/lib/docker/overlay2/cc8d8e60b0ee34e21689efe5bb35c216cbffb73f5ae72ed2b6f7287b02e1353e/diff",
"MergedDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/merged",
"UpperDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/diff",
"WorkDir": "/var/lib/docker/overlay2/c7a8e92700496fd109a9b95d6db38994f9d557379dc857b5be0d2da50c668b4e/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-101526",
"Source": "/var/lib/docker/volumes/functional-101526/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-101526",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-101526",
"name.minikube.sigs.k8s.io": "functional-101526",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4ee213ab313f5c83923a4e4114dbfa3307a7cdfa5fee0b6149e38db18dcfee54",
"SandboxKey": "/var/run/docker/netns/4ee213ab313f",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-101526": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "12:00:24:89:4a:0b",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "b411c3419d523cfa2383a6364505420c41dc4f08803e3fb647c04bddad805963",
"EndpointID": "c3c20c72fd6341613366a3e9d4c8335d533a02172a277ec8bc5f46ed2a1e21f0",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-101526",
"7d26b0b609d5"
]
}
}
}
}
]
-- /stdout --
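The inspect output itself looks healthy: the container is running and privileged, sits on the functional-101526 network at 192.168.49.2, and the apiserver port 8441 is published on 127.0.0.1:32791, so the failure is inside the node rather than at the Docker layer. For reference, that port mapping can be pulled straight out of the JSON with a Go-template filter (a sketch; the template path mirrors the NetworkSettings.Ports structure shown above):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-101526
    # prints 32791 for the state captured here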
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-101526 -n functional-101526: exit status 6 (334.137346ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1205 06:23:33.398923 48229 status.go:458] kubeconfig endpoint: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
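The exit status 6 is consistent with the failed start: the container reports Running, but the profile was never written to the kubeconfig, so the endpoint lookup fails. Had the cluster come up, the follow-up the stdout warning recommends would be (a sketch; update-context is intended for stale or drifted kubeconfig entries):

    out/minikube-linux-arm64 -p functional-101526 update-context
    kubectl config current-context   # verify which context kubectl now targets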
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-101526 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-226068 image ls │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ ssh │ functional-226068 ssh sudo cat /etc/ssl/certs/41922.pem │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ ssh │ functional-226068 ssh sudo cat /usr/share/ca-certificates/41922.pem │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ ssh │ functional-226068 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image ls │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image load --daemon kicbase/echo-server:functional-226068 --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ update-context │ functional-226068 update-context --alsologtostderr -v=2 │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ update-context │ functional-226068 update-context --alsologtostderr -v=2 │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ update-context │ functional-226068 update-context --alsologtostderr -v=2 │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image save kicbase/echo-server:functional-226068 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image rm kicbase/echo-server:functional-226068 --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image ls │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image ls │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:14 UTC │
│ image │ functional-226068 image save --daemon kicbase/echo-server:functional-226068 --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:14 UTC │ 05 Dec 25 06:15 UTC │
│ image │ functional-226068 image ls --format short --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ image │ functional-226068 image ls --format yaml --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ image │ functional-226068 image ls --format json --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ image │ functional-226068 image ls --format table --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ ssh │ functional-226068 ssh pgrep buildkitd │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ │
│ image │ functional-226068 image build -t localhost/my-image:functional-226068 testdata/build --alsologtostderr │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ image │ functional-226068 image ls │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ delete │ -p functional-226068 │ functional-226068 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ 05 Dec 25 06:15 UTC │
│ start │ -p functional-101526 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-101526 │ jenkins │ v1.37.0 │ 05 Dec 25 06:15 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/05 06:15:08
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 06:15:08.085680 42237 out.go:360] Setting OutFile to fd 1 ...
I1205 06:15:08.085787 42237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:08.085791 42237 out.go:374] Setting ErrFile to fd 2...
I1205 06:15:08.085795 42237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1205 06:15:08.086057 42237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2385/.minikube/bin
I1205 06:15:08.086476 42237 out.go:368] Setting JSON to false
I1205 06:15:08.087263 42237 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":3455,"bootTime":1764911853,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1205 06:15:08.087323 42237 start.go:143] virtualization:
I1205 06:15:08.091834 42237 out.go:179] * [functional-101526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1205 06:15:08.095712 42237 out.go:179] - MINIKUBE_LOCATION=21997
I1205 06:15:08.095807 42237 notify.go:221] Checking for updates...
I1205 06:15:08.103071 42237 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 06:15:08.106468 42237 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21997-2385/kubeconfig
I1205 06:15:08.109678 42237 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2385/.minikube
I1205 06:15:08.112864 42237 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1205 06:15:08.116063 42237 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1205 06:15:08.119351 42237 driver.go:422] Setting default libvirt URI to qemu:///system
I1205 06:15:08.152457 42237 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1205 06:15:08.152570 42237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 06:15:08.207578 42237 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-05 06:15:08.198429499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 06:15:08.207681 42237 docker.go:319] overlay module found
I1205 06:15:08.210970 42237 out.go:179] * Using the docker driver based on user configuration
I1205 06:15:08.214116 42237 start.go:309] selected driver: docker
I1205 06:15:08.214126 42237 start.go:927] validating driver "docker" against <nil>
I1205 06:15:08.214138 42237 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 06:15:08.214872 42237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 06:15:08.279305 42237 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-05 06:15:08.270632196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1205 06:15:08.279443 42237 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1205 06:15:08.279663 42237 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 06:15:08.282757 42237 out.go:179] * Using Docker driver with root privileges
I1205 06:15:08.285777 42237 cni.go:84] Creating CNI manager for ""
I1205 06:15:08.285841 42237 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1205 06:15:08.285849 42237 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1205 06:15:08.285929 42237 start.go:353] cluster config:
{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 06:15:08.289062 42237 out.go:179] * Starting "functional-101526" primary control-plane node in "functional-101526" cluster
I1205 06:15:08.292061 42237 cache.go:134] Beginning downloading kic base image for docker with containerd
I1205 06:15:08.295079 42237 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
I1205 06:15:08.298052 42237 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1205 06:15:08.298128 42237 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
I1205 06:15:08.318088 42237 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
I1205 06:15:08.318098 42237 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
W1205 06:15:08.348375 42237 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
W1205 06:15:08.543328 42237 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
I1205 06:15:08.543549 42237 cache.go:107] acquiring lock: {Name:mke425626e4e20c30a9efd33961333fed634853e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543639 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1205 06:15:08.543648 42237 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 116.563µs
I1205 06:15:08.543660 42237 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1205 06:15:08.543676 42237 cache.go:107] acquiring lock: {Name:mkb561b80241cf4846f0e5a534178ed52a037789 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543709 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
I1205 06:15:08.543712 42237 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 44.177µs
I1205 06:15:08.543717 42237 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
I1205 06:15:08.543719 42237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
I1205 06:15:08.543726 42237 cache.go:107] acquiring lock: {Name:mkf2a29bc196730bc7e89c04be57f920efca3c00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543751 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
I1205 06:15:08.543755 42237 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 30.81µs
I1205 06:15:08.543752 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json: {Name:mkbb98d5b7a6e64e5ab9397a325db089a7d7b14b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:08.543761 42237 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
I1205 06:15:08.543770 42237 cache.go:107] acquiring lock: {Name:mkdc767b4f49014de725598bb6cb15d1297b54d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543852 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
I1205 06:15:08.543856 42237 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 87.09µs
I1205 06:15:08.543860 42237 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
I1205 06:15:08.543868 42237 cache.go:107] acquiring lock: {Name:mk68367bdf63d96d7cfadffa4b98c83f11089d1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543893 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
I1205 06:15:08.543897 42237 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.318µs
I1205 06:15:08.543901 42237 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
I1205 06:15:08.543912 42237 cache.go:243] Successfully downloaded all kic artifacts
I1205 06:15:08.543909 42237 cache.go:107] acquiring lock: {Name:mkb2778174a5f9700f2027473b10972bec57c9c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543933 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I1205 06:15:08.543934 42237 start.go:360] acquireMachinesLock for functional-101526: {Name:mk88346f2ec5f086715fb8753ad2856f7779bd26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543937 42237 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 29.136µs
I1205 06:15:08.543952 42237 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I1205 06:15:08.543960 42237 cache.go:107] acquiring lock: {Name:mk6464893626430976de8f35edeed32de7fb67af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.543970 42237 start.go:364] duration metric: took 27.758µs to acquireMachinesLock for "functional-101526"
I1205 06:15:08.543984 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
I1205 06:15:08.543988 42237 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 28.973µs
I1205 06:15:08.543992 42237 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
I1205 06:15:08.544000 42237 cache.go:107] acquiring lock: {Name:mk17df66985c077600cf0a039140c1c2865c1e83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 06:15:08.544024 42237 cache.go:115] /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
I1205 06:15:08.544028 42237 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.702µs
I1205 06:15:08.544033 42237 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
I1205 06:15:08.543985 42237 start.go:93] Provisioning new machine with config: &{Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1205 06:15:08.544040 42237 cache.go:87] Successfully saved all images to host disk.
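
The cache hits above (cache.go:115 and cache.go:96) come down to a stat on the per-architecture tarball followed by a duration metric. A minimal Go sketch of that check; the ref-to-path mapping is inferred from the paths in the log, not minikube's actual helper:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cachedImagePath maps an image ref like "registry.k8s.io/etcd:3.6.5-0"
// to the on-disk layout seen in the log (":" becomes "_" in the filename).
func cachedImagePath(cacheDir, arch, ref string) string {
	return filepath.Join(cacheDir, "images", arch, strings.ReplaceAll(ref, ":", "_"))
}

func main() {
	start := time.Now()
	p := cachedImagePath(os.ExpandEnv("$HOME/.minikube/cache"), "arm64", "registry.k8s.io/etcd:3.6.5-0")
	if _, err := os.Stat(p); err == nil {
		fmt.Printf("cache image hit for %s took %s\n", p, time.Since(start))
	} else {
		fmt.Printf("cache miss: %v\n", err)
	}
}
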
I1205 06:15:08.544044 42237 start.go:125] createHost starting for "" (driver="docker")
I1205 06:15:08.549544 42237 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1205 06:15:08.549815 42237 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:40155 to docker env.
I1205 06:15:08.549884 42237 start.go:159] libmachine.API.Create for "functional-101526" (driver="docker")
I1205 06:15:08.549905 42237 client.go:173] LocalClient.Create starting
I1205 06:15:08.549977 42237 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem
I1205 06:15:08.550007 42237 main.go:143] libmachine: Decoding PEM data...
I1205 06:15:08.550020 42237 main.go:143] libmachine: Parsing certificate...
I1205 06:15:08.550086 42237 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem
I1205 06:15:08.550103 42237 main.go:143] libmachine: Decoding PEM data...
I1205 06:15:08.550113 42237 main.go:143] libmachine: Parsing certificate...
I1205 06:15:08.550472 42237 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1205 06:15:08.575886 42237 cli_runner.go:211] docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1205 06:15:08.575961 42237 network_create.go:284] running [docker network inspect functional-101526] to gather additional debugging logs...
I1205 06:15:08.575975 42237 cli_runner.go:164] Run: docker network inspect functional-101526
W1205 06:15:08.592841 42237 cli_runner.go:211] docker network inspect functional-101526 returned with exit code 1
I1205 06:15:08.592860 42237 network_create.go:287] error running [docker network inspect functional-101526]: docker network inspect functional-101526: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-101526 not found
I1205 06:15:08.592871 42237 network_create.go:289] output of [docker network inspect functional-101526]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-101526 not found
** /stderr **
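
When the formatted inspect fails, network_create.go falls back to a plain docker network inspect purely to capture the stdout/stderr shown above. A sketch of that capture pattern with os/exec; the command and the expected "not found" stderr come from the log, the rest is illustrative:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "inspect", "functional-101526")
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run() // exit status 1 when the network does not exist yet
	fmt.Printf("-- stdout --\n%s-- /stdout --\n", stdout.String())
	fmt.Printf("** stderr **\n%s** /stderr **\n", stderr.String())
	if err != nil {
		fmt.Println("inspect failed:", err)
	}
}
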
I1205 06:15:08.592966 42237 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 06:15:08.609111 42237 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16ec0}
I1205 06:15:08.609144 42237 network_create.go:124] attempt to create docker network functional-101526 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1205 06:15:08.609233 42237 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-101526 functional-101526
I1205 06:15:08.671175 42237 network_create.go:108] docker network functional-101526 192.168.49.0/24 created
I1205 06:15:08.671201 42237 kic.go:121] calculated static IP "192.168.49.2" for the "functional-101526" container
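
kic.go:121 derives the container's static address from the subnet it just created: the gateway takes .1 and the first client gets .2 (the ClientMin field in the network struct above). The arithmetic, sketched with net/netip rather than minikube's own helpers:

package main

import (
	"fmt"
	"net/netip"
)

// firstClientIP returns the second usable address in the subnet:
// .1 is reserved for the gateway, so the first container gets .2.
func firstClientIP(cidr string) (netip.Addr, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, err
	}
	ip := p.Masked().Addr().Next() // .1 (gateway)
	return ip.Next(), nil          // .2 (first client)
}

func main() {
	ip, _ := firstClientIP("192.168.49.0/24")
	fmt.Println(ip) // 192.168.49.2
}
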
I1205 06:15:08.671276 42237 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1205 06:15:08.687249 42237 cli_runner.go:164] Run: docker volume create functional-101526 --label name.minikube.sigs.k8s.io=functional-101526 --label created_by.minikube.sigs.k8s.io=true
I1205 06:15:08.703836 42237 oci.go:103] Successfully created a docker volume functional-101526
I1205 06:15:08.703911 42237 cli_runner.go:164] Run: docker run --rm --name functional-101526-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-101526 --entrypoint /usr/bin/test -v functional-101526:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
I1205 06:15:09.262146 42237 oci.go:107] Successfully prepared a docker volume functional-101526
I1205 06:15:09.262216 42237 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1205 06:15:09.262350 42237 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1205 06:15:09.262464 42237 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1205 06:15:09.318657 42237 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-101526 --name functional-101526 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-101526 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-101526 --network functional-101526 --ip 192.168.49.2 --volume functional-101526:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
I1205 06:15:09.620406 42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Running}}
I1205 06:15:09.639845 42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:15:09.662996 42237 cli_runner.go:164] Run: docker exec functional-101526 stat /var/lib/dpkg/alternatives/iptables
I1205 06:15:09.711148 42237 oci.go:144] the created container "functional-101526" has a running status.
I1205 06:15:09.711167 42237 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa...
I1205 06:15:09.925808 42237 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1205 06:15:09.954961 42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:15:09.979514 42237 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1205 06:15:09.979525 42237 kic_runner.go:114] Args: [docker exec --privileged functional-101526 chown docker:docker /home/docker/.ssh/authorized_keys]
I1205 06:15:10.044015 42237 cli_runner.go:164] Run: docker container inspect functional-101526 --format={{.State.Status}}
I1205 06:15:10.071254 42237 machine.go:94] provisionDockerMachine start ...
I1205 06:15:10.071342 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:10.098421 42237 main.go:143] libmachine: Using SSH client type: native
I1205 06:15:10.098737 42237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1205 06:15:10.098743 42237 main.go:143] libmachine: About to run SSH command:
hostname
I1205 06:15:10.099371 42237 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56338->127.0.0.1:32788: read: connection reset by peer
I1205 06:15:13.248903 42237 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
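
The reset at 06:15:10 followed by a successful hostname at 06:15:13 is libmachine retrying the dial until sshd inside the container is ready. A rough sketch of such a retry loop using golang.org/x/crypto/ssh; the key path and port are from the log, the backoff policy is illustrative:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	var lastErr error
	for end := time.Now().Add(timeout); time.Now().Before(end); time.Sleep(time.Second) {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "read: connection reset by peer" while sshd starts
	}
	return nil, lastErr
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/functional-101526/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32788", cfg, time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
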
I1205 06:15:13.248917 42237 ubuntu.go:182] provisioning hostname "functional-101526"
I1205 06:15:13.248995 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:13.266997 42237 main.go:143] libmachine: Using SSH client type: native
I1205 06:15:13.267294 42237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1205 06:15:13.267302 42237 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-101526 && echo "functional-101526" | sudo tee /etc/hostname
I1205 06:15:13.426392 42237 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-101526
I1205 06:15:13.426477 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:13.444275 42237 main.go:143] libmachine: Using SSH client type: native
I1205 06:15:13.444583 42237 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1205 06:15:13.444597 42237 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-101526' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-101526/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-101526' | sudo tee -a /etc/hosts;
  fi
fi
I1205 06:15:13.597302 42237 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1205 06:15:13.597319 42237 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2385/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2385/.minikube}
I1205 06:15:13.597343 42237 ubuntu.go:190] setting up certificates
I1205 06:15:13.597351 42237 provision.go:84] configureAuth start
I1205 06:15:13.597427 42237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
I1205 06:15:13.615106 42237 provision.go:143] copyHostCerts
I1205 06:15:13.615160 42237 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem, removing ...
I1205 06:15:13.615167 42237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem
I1205 06:15:13.615248 42237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/ca.pem (1078 bytes)
I1205 06:15:13.615339 42237 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem, removing ...
I1205 06:15:13.615343 42237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem
I1205 06:15:13.615367 42237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/cert.pem (1123 bytes)
I1205 06:15:13.615416 42237 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem, removing ...
I1205 06:15:13.615420 42237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem
I1205 06:15:13.615440 42237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2385/.minikube/key.pem (1675 bytes)
I1205 06:15:13.615485 42237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem org=jenkins.functional-101526 san=[127.0.0.1 192.168.49.2 functional-101526 localhost minikube]
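
provision.go:117 issues a server certificate signed by the minikube CA, carrying exactly the SANs listed above. A compact stdlib-only sketch of that issuance; it generates a stand-in CA and uses ECDSA keys for brevity, where minikube instead loads ca.pem/ca-key.pem from disk:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; minikube loads the real one from .minikube/certs.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the provision.go:117 line.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-101526"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"functional-101526", "localhost", "minikube"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}
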
I1205 06:15:13.898799 42237 provision.go:177] copyRemoteCerts
I1205 06:15:13.898849 42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1205 06:15:13.898888 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:13.916044 42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:15:14.017456 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1205 06:15:14.044736 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1205 06:15:14.063296 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1205 06:15:14.081456 42237 provision.go:87] duration metric: took 484.070012ms to configureAuth
I1205 06:15:14.081473 42237 ubuntu.go:206] setting minikube options for container-runtime
I1205 06:15:14.081675 42237 config.go:182] Loaded profile config "functional-101526": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1205 06:15:14.081682 42237 machine.go:97] duration metric: took 4.010417903s to provisionDockerMachine
I1205 06:15:14.081687 42237 client.go:176] duration metric: took 5.531777807s to LocalClient.Create
I1205 06:15:14.081703 42237 start.go:167] duration metric: took 5.531819415s to libmachine.API.Create "functional-101526"
I1205 06:15:14.081709 42237 start.go:293] postStartSetup for "functional-101526" (driver="docker")
I1205 06:15:14.081719 42237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1205 06:15:14.081771 42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1205 06:15:14.081808 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:14.099301 42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:15:14.205397 42237 ssh_runner.go:195] Run: cat /etc/os-release
I1205 06:15:14.208761 42237 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1205 06:15:14.208780 42237 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
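
The VERSION_CODENAME warning above is libmachine parsing /etc/os-release into a fixed struct and skipping keys it has no field for; it is harmless. A sketch of that key=value parse, with the set of known keys chosen for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	known := map[string]bool{"NAME": true, "VERSION": true, "ID": true, "PRETTY_NAME": true}
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		if !known[k] {
			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
			continue
		}
		fmt.Printf("%s = %s\n", k, strings.Trim(v, `"`))
	}
}
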
I1205 06:15:14.208789 42237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/addons for local assets ...
I1205 06:15:14.208843 42237 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2385/.minikube/files for local assets ...
I1205 06:15:14.208929 42237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem -> 41922.pem in /etc/ssl/certs
I1205 06:15:14.209006 42237 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts -> hosts in /etc/test/nested/copy/4192
I1205 06:15:14.209050 42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4192
I1205 06:15:14.216927 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /etc/ssl/certs/41922.pem (1708 bytes)
I1205 06:15:14.234804 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/test/nested/copy/4192/hosts --> /etc/test/nested/copy/4192/hosts (40 bytes)
I1205 06:15:14.252605 42237 start.go:296] duration metric: took 170.883342ms for postStartSetup
I1205 06:15:14.252986 42237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
I1205 06:15:14.270158 42237 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/config.json ...
I1205 06:15:14.270415 42237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1205 06:15:14.270453 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:14.287138 42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:15:14.387303 42237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1205 06:15:14.391918 42237 start.go:128] duration metric: took 5.847860125s to createHost
I1205 06:15:14.391933 42237 start.go:83] releasing machines lock for "functional-101526", held for 5.84795753s
I1205 06:15:14.392007 42237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-101526
I1205 06:15:14.414425 42237 out.go:179] * Found network options:
I1205 06:15:14.417427 42237 out.go:179] - HTTP_PROXY=localhost:40155
W1205 06:15:14.420684 42237 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1205 06:15:14.423546 42237 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1205 06:15:14.426442 42237 ssh_runner.go:195] Run: cat /version.json
I1205 06:15:14.426492 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:14.426516 42237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1205 06:15:14.426597 42237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-101526
I1205 06:15:14.446136 42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:15:14.446730 42237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2385/.minikube/machines/functional-101526/id_rsa Username:docker}
I1205 06:15:14.646638 42237 ssh_runner.go:195] Run: systemctl --version
I1205 06:15:14.653380 42237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1205 06:15:14.657636 42237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1205 06:15:14.657703 42237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1205 06:15:14.684713 42237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1205 06:15:14.684740 42237 start.go:496] detecting cgroup driver to use...
I1205 06:15:14.684777 42237 detect.go:187] detected "cgroupfs" cgroup driver on host os
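
detect.go reports the host cgroup driver here, and that answer drives the SystemdCgroup rewrite of containerd's config a few lines below. The real detection is more involved, but a minimal probe that at least distinguishes a unified (v2) hierarchy from legacy v1 is just a stat:

package main

import (
	"fmt"
	"os"
)

func main() {
	// On a unified (v2) hierarchy this file exists at the cgroup root;
	// on legacy v1 hosts it does not. The driver choice (cgroupfs vs
	// systemd) additionally depends on what is managing the cgroups.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified)")
	} else {
		fmt.Println("cgroup v1 (legacy)")
	}
}
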
I1205 06:15:14.684849 42237 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1205 06:15:14.700821 42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1205 06:15:14.714348 42237 docker.go:218] disabling cri-docker service (if available) ...
I1205 06:15:14.714401 42237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1205 06:15:14.731838 42237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1205 06:15:14.750375 42237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1205 06:15:14.860625 42237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1205 06:15:14.984957 42237 docker.go:234] disabling docker service ...
I1205 06:15:14.985014 42237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1205 06:15:15.016693 42237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1205 06:15:15.034358 42237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1205 06:15:15.165942 42237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1205 06:15:15.277220 42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1205 06:15:15.290747 42237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1205 06:15:15.304423 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1205 06:15:15.312935 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1205 06:15:15.321800 42237 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1205 06:15:15.321869 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1205 06:15:15.330598 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1205 06:15:15.339253 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1205 06:15:15.347734 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1205 06:15:15.356534 42237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1205 06:15:15.364637 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1205 06:15:15.373515 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1205 06:15:15.382173 42237 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1205 06:15:15.390876 42237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1205 06:15:15.398369 42237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1205 06:15:15.405846 42237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1205 06:15:15.514233 42237 ssh_runner.go:195] Run: sudo systemctl restart containerd
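
The block of sed invocations above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, conf_dir, enable_unprivileged_ports) repeats one idempotent pattern: anchor on the key, preserve its indentation, rewrite the value, then daemon-reload and restart containerd. The Go equivalent of one such edit, run locally here rather than through ssh_runner:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	conf, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		panic(err)
	}
	out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
	fmt.Printf("rewrote %d bytes\n", len(out)) // write back with os.WriteFile in practice
}
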
I1205 06:15:15.607234 42237 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1205 06:15:15.607305 42237 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1205 06:15:15.611907 42237 start.go:564] Will wait 60s for crictl version
I1205 06:15:15.611962 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:15.615738 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1205 06:15:15.640408 42237 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.5
RuntimeApiVersion: v1
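
After the restart, start.go:543 gives the containerd socket (and then crictl) a 60s budget to appear before proceeding. A sketch of that wait, assuming a simple stat-based probe:

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is up")
}
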
I1205 06:15:15.640476 42237 ssh_runner.go:195] Run: containerd --version
I1205 06:15:15.659995 42237 ssh_runner.go:195] Run: containerd --version
I1205 06:15:15.683286 42237 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
I1205 06:15:15.686223 42237 cli_runner.go:164] Run: docker network inspect functional-101526 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 06:15:15.702314 42237 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1205 06:15:15.706164 42237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1205 06:15:15.715794 42237 kubeadm.go:884] updating cluster {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1205 06:15:15.715892 42237 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1205 06:15:15.715947 42237 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 06:15:15.739837 42237 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
I1205 06:15:15.739850 42237 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1205 06:15:15.739897 42237 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:15.740099 42237 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:15.740182 42237 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:15.740255 42237 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:15.740331 42237 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:15.740397 42237 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1205 06:15:15.740462 42237 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
I1205 06:15:15.740529 42237 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:15.741980 42237 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:15.742327 42237 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
I1205 06:15:15.742631 42237 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:15.742763 42237 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1205 06:15:15.742862 42237 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:15.742968 42237 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:15.743077 42237 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:15.743180 42237 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:16.093307 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
I1205 06:15:16.093379 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:16.114873 42237 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
I1205 06:15:16.114908 42237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:16.114958 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.118477 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:16.142234 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:16.155059 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
I1205 06:15:16.155122 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:16.172086 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
I1205 06:15:16.172154 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
I1205 06:15:16.176601 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1205 06:15:16.186285 42237 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
I1205 06:15:16.186318 42237 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:16.186366 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.191422 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
I1205 06:15:16.191486 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:16.208716 42237 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
I1205 06:15:16.208759 42237 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
I1205 06:15:16.208806 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.233883 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
I1205 06:15:16.233969 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1205 06:15:16.234066 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:16.234109 42237 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
I1205 06:15:16.234131 42237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:16.234152 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.234204 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1205 06:15:16.242548 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
I1205 06:15:16.242602 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:16.246168 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
I1205 06:15:16.246224 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:16.254304 42237 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
I1205 06:15:16.254359 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1205 06:15:16.291393 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1205 06:15:16.291434 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
I1205 06:15:16.291447 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
I1205 06:15:16.291482 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:16.291539 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:16.350306 42237 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
I1205 06:15:16.350342 42237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:16.350388 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.357490 42237 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
I1205 06:15:16.357522 42237 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I1205 06:15:16.357568 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.357613 42237 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
I1205 06:15:16.357625 42237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:16.357645 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:16.383324 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:16.383392 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1205 06:15:16.383452 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1205 06:15:16.410169 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:16.410241 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1205 06:15:16.410326 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:16.517134 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1205 06:15:16.517249 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
I1205 06:15:16.517314 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
I1205 06:15:16.517350 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
I1205 06:15:16.517403 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
I1205 06:15:16.542192 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1205 06:15:16.542271 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:16.542351 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:16.574743 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
I1205 06:15:16.574768 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
I1205 06:15:16.574821 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
I1205 06:15:16.574891 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1205 06:15:16.574931 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
I1205 06:15:16.574939 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
I1205 06:15:16.657974 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1205 06:15:16.658069 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1205 06:15:16.658295 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1205 06:15:16.658352 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
I1205 06:15:16.658366 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
I1205 06:15:16.748280 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
I1205 06:15:16.748370 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
I1205 06:15:16.748694 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
I1205 06:15:16.748757 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1205 06:15:16.752957 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
I1205 06:15:16.753053 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1205 06:15:16.783074 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
I1205 06:15:16.783102 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
I1205 06:15:16.783142 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
I1205 06:15:16.783150 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
I1205 06:15:16.786128 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
I1205 06:15:16.786153 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
I1205 06:15:16.881438 42237 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
I1205 06:15:16.881497 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
W1205 06:15:16.885558 42237 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I1205 06:15:16.885725 42237 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
I1205 06:15:16.885782 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:17.227948 42237 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I1205 06:15:17.227979 42237 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:17.228030 42237 ssh_runner.go:195] Run: which crictl
I1205 06:15:17.228071 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
I1205 06:15:17.228087 42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1205 06:15:17.228121 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1205 06:15:18.459841 42237 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.231698172s)
I1205 06:15:18.459860 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
I1205 06:15:18.459883 42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1205 06:15:18.459932 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1205 06:15:18.459995 42237 ssh_runner.go:235] Completed: which crictl: (1.231958277s)
I1205 06:15:18.460022 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:19.389944 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
I1205 06:15:19.389964 42237 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
I1205 06:15:19.390014 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
I1205 06:15:19.390084 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:20.381472 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
I1205 06:15:20.381604 42237 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1205 06:15:20.381661 42237 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
I1205 06:15:20.381684 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
I1205 06:15:21.780949 42237 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.399243244s)
I1205 06:15:21.780965 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
I1205 06:15:21.780984 42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1205 06:15:21.781030 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1205 06:15:21.781096 42237 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.399484034s)
I1205 06:15:21.781117 42237 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I1205 06:15:21.781198 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1205 06:15:22.685465 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
I1205 06:15:22.685488 42237 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1205 06:15:22.685544 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1205 06:15:22.685567 42237 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1205 06:15:22.685594 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I1205 06:15:23.750545 42237 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.064977577s)
I1205 06:15:23.750577 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
I1205 06:15:23.750594 42237 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1205 06:15:23.750639 42237 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1205 06:15:24.105778 42237 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2385/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1205 06:15:24.105803 42237 cache_images.go:125] Successfully loaded all cached images
I1205 06:15:24.105808 42237 cache_images.go:94] duration metric: took 8.365946804s to LoadCachedImages
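
The whole LoadCachedImages exchange above is one loop per image: check the runtime with ctr images ls, remove any stale tag with crictl rmi, scp the cached tarball to /var/lib/minikube/images if the stat shows it is missing, then import it with ctr. A local sketch of the import step, with paths taken from the log and error handling trimmed:

package main

import (
	"fmt"
	"os/exec"
)

func importImage(tar string) error {
	// Mirrors: sudo ctr -n=k8s.io images import /var/lib/minikube/images/<tar>
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v\n%s", tar, err, out)
	}
	return nil
}

func main() {
	for _, tar := range []string{
		"/var/lib/minikube/images/pause_3.10.1",
		"/var/lib/minikube/images/etcd_3.6.5-0",
		"/var/lib/minikube/images/coredns_v1.13.1",
	} {
		if err := importImage(tar); err != nil {
			fmt.Println(err)
		}
	}
}
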
I1205 06:15:24.105826 42237 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1205 06:15:24.105938 42237 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-101526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1205 06:15:24.106000 42237 ssh_runner.go:195] Run: sudo crictl info
I1205 06:15:24.131457 42237 cni.go:84] Creating CNI manager for ""
I1205 06:15:24.131467 42237 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1205 06:15:24.131482 42237 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1205 06:15:24.131503 42237 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-101526 NodeName:functional-101526 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1205 06:15:24.131614 42237 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-101526"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
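
kubeadm.go:196 renders the config above from the cluster struct; only the per-node values (name, IP, port) vary between profiles. A text/template sketch of the InitConfiguration fragment; the template shape here is assumed for illustration and is not minikube's actual asset:

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.IP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.IP}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	t.Execute(os.Stdout, struct {
		Name, IP string
		Port     int
	}{"functional-101526", "192.168.49.2", 8441})
}
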
I1205 06:15:24.131686 42237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1205 06:15:24.139816 42237 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
Initiating transfer...
I1205 06:15:24.139870 42237 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
I1205 06:15:24.147770 42237 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
I1205 06:15:24.147852 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
I1205 06:15:24.147929 42237 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
I1205 06:15:24.147960 42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1205 06:15:24.148031 42237 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
I1205 06:15:24.148079 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
I1205 06:15:24.154890 42237 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
I1205 06:15:24.154915 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
I1205 06:15:24.168615 42237 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
I1205 06:15:24.168638 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
I1205 06:15:24.168707 42237 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
I1205 06:15:24.190394 42237 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
I1205 06:15:24.190418 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
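Each of the three binaries above is fetched against the matching SHA-256 manifest published alongside it at dl.k8s.io (the checksum=file:... suffix on the download URLs). A minimal shell sketch of the same verification done by hand, assuming the .sha256 file carries just the bare hash:

$ curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl
$ curl -L -o kubectl.sha256 https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check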
I1205 06:15:24.948025 42237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1205 06:15:24.958265 42237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1205 06:15:24.971429 42237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1205 06:15:24.985707 42237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1205 06:15:24.999853 42237 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1205 06:15:25.007685 42237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1205 06:15:25.023162 42237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1205 06:15:25.143062 42237 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1205 06:15:25.161549 42237 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526 for IP: 192.168.49.2
I1205 06:15:25.161560 42237 certs.go:195] generating shared ca certs ...
I1205 06:15:25.161585 42237 certs.go:227] acquiring lock for ca certs: {Name:mk980549de8aa45b9b57067459ea82f49bce0523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:25.161732 42237 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key
I1205 06:15:25.161777 42237 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key
I1205 06:15:25.161783 42237 certs.go:257] generating profile certs ...
I1205 06:15:25.161835 42237 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key
I1205 06:15:25.161844 42237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt with IP's: []
I1205 06:15:25.526060 42237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt ...
I1205 06:15:25.526076 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.crt: {Name:mk0f62cdda76b04469b61f130355c66263b88984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:25.526273 42237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key ...
I1205 06:15:25.526279 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/client.key: {Name:mkd962052fc981e118f4a3acb328540e925978f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:25.526386 42237 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a
I1205 06:15:25.526398 42237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1205 06:15:25.807207 42237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a ...
I1205 06:15:25.807222 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a: {Name:mkfd06686d862316da89c3d24bf271a94894046c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:25.807407 42237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a ...
I1205 06:15:25.807413 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a: {Name:mkaece3f89e0518f39d77b16181dc1f1a3bf6684 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:25.807494 42237 certs.go:382] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt.b6aec90a -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt
I1205 06:15:25.807576 42237 certs.go:386] copying /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key.b6aec90a -> /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key
I1205 06:15:25.807630 42237 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key
I1205 06:15:25.807642 42237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt with IP's: []
I1205 06:15:26.044159 42237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt ...
I1205 06:15:26.044180 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt: {Name:mk500bfabecdb269c991955b2e95c327e15b1277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:26.044388 42237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key ...
I1205 06:15:26.044397 42237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key: {Name:mk9a70f3d246740e46dcdee227203c2847fb514f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 06:15:26.044608 42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem (1338 bytes)
W1205 06:15:26.044652 42237 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192_empty.pem, impossibly tiny 0 bytes
I1205 06:15:26.044660 42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca-key.pem (1675 bytes)
I1205 06:15:26.044688 42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/ca.pem (1078 bytes)
I1205 06:15:26.044718 42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/cert.pem (1123 bytes)
I1205 06:15:26.044742 42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/certs/key.pem (1675 bytes)
I1205 06:15:26.044788 42237 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem (1708 bytes)
I1205 06:15:26.045440 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1205 06:15:26.065845 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1205 06:15:26.085106 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1205 06:15:26.105048 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1205 06:15:26.124533 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1205 06:15:26.142618 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1205 06:15:26.161778 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1205 06:15:26.180514 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/profiles/functional-101526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1205 06:15:26.200113 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/files/etc/ssl/certs/41922.pem --> /usr/share/ca-certificates/41922.pem (1708 bytes)
I1205 06:15:26.218401 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1205 06:15:26.236432 42237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2385/.minikube/certs/4192.pem --> /usr/share/ca-certificates/4192.pem (1338 bytes)
I1205 06:15:26.254626 42237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1205 06:15:26.267808 42237 ssh_runner.go:195] Run: openssl version
I1205 06:15:26.274286 42237 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4192.pem
I1205 06:15:26.282096 42237 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4192.pem /etc/ssl/certs/4192.pem
I1205 06:15:26.290014 42237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4192.pem
I1205 06:15:26.294054 42237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 5 06:15 /usr/share/ca-certificates/4192.pem
I1205 06:15:26.294112 42237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4192.pem
I1205 06:15:26.335572 42237 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1205 06:15:26.343548 42237 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4192.pem /etc/ssl/certs/51391683.0
I1205 06:15:26.351292 42237 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41922.pem
I1205 06:15:26.359075 42237 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41922.pem /etc/ssl/certs/41922.pem
I1205 06:15:26.367448 42237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41922.pem
I1205 06:15:26.371524 42237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 5 06:15 /usr/share/ca-certificates/41922.pem
I1205 06:15:26.371581 42237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41922.pem
I1205 06:15:26.414136 42237 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1205 06:15:26.421991 42237 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41922.pem /etc/ssl/certs/3ec20f2e.0
I1205 06:15:26.429677 42237 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1205 06:15:26.439161 42237 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1205 06:15:26.446961 42237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1205 06:15:26.451155 42237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 5 06:06 /usr/share/ca-certificates/minikubeCA.pem
I1205 06:15:26.451215 42237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1205 06:15:26.492773 42237 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1205 06:15:26.500523 42237 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
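The openssl/ln pairs above follow the standard OpenSSL CA-directory layout: openssl x509 -hash prints the certificate's subject-name hash, and the certificate is then exposed to TLS clients as /etc/ssl/certs/<hash>.0. Reproducing the minikubeCA link by hand, for illustration (the hash value is taken from the link target in the log line above):

$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
b5213941
$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0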
I1205 06:15:26.508204 42237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1205 06:15:26.511988 42237 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1205 06:15:26.512033 42237 kubeadm.go:401] StartCluster: {Name:functional-101526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-101526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 06:15:26.512101 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1205 06:15:26.512161 42237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1205 06:15:26.539779 42237 cri.go:89] found id: ""
I1205 06:15:26.539848 42237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1205 06:15:26.547929 42237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1205 06:15:26.555769 42237 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1205 06:15:26.555822 42237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1205 06:15:26.563699 42237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1205 06:15:26.563726 42237 kubeadm.go:158] found existing configuration files:
I1205 06:15:26.563777 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1205 06:15:26.571872 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1205 06:15:26.571928 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1205 06:15:26.579588 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1205 06:15:26.588283 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1205 06:15:26.588337 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1205 06:15:26.596119 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1205 06:15:26.604901 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1205 06:15:26.604958 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1205 06:15:26.613041 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1205 06:15:26.621583 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1205 06:15:26.621636 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1205 06:15:26.629359 42237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1205 06:15:26.675413 42237 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1205 06:15:26.675612 42237 kubeadm.go:319] [preflight] Running pre-flight checks
I1205 06:15:26.742455 42237 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1205 06:15:26.742525 42237 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1205 06:15:26.742566 42237 kubeadm.go:319] OS: Linux
I1205 06:15:26.742610 42237 kubeadm.go:319] CGROUPS_CPU: enabled
I1205 06:15:26.742657 42237 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1205 06:15:26.742736 42237 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1205 06:15:26.742808 42237 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1205 06:15:26.742856 42237 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1205 06:15:26.742904 42237 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1205 06:15:26.742948 42237 kubeadm.go:319] CGROUPS_PIDS: enabled
I1205 06:15:26.742995 42237 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1205 06:15:26.743041 42237 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1205 06:15:26.811114 42237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1205 06:15:26.811218 42237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1205 06:15:26.811309 42237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1205 06:15:26.816797 42237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1205 06:15:26.825905 42237 out.go:252] - Generating certificates and keys ...
I1205 06:15:26.825997 42237 kubeadm.go:319] [certs] Using existing ca certificate authority
I1205 06:15:26.826062 42237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1205 06:15:26.990701 42237 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1205 06:15:27.147598 42237 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1205 06:15:27.299024 42237 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1205 06:15:27.526717 42237 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1205 06:15:27.604897 42237 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1205 06:15:27.605064 42237 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1205 06:15:28.161195 42237 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1205 06:15:28.161503 42237 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1205 06:15:28.312221 42237 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1205 06:15:28.399597 42237 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1205 06:15:29.092420 42237 kubeadm.go:319] [certs] Generating "sa" key and public key
I1205 06:15:29.092663 42237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1205 06:15:29.175227 42237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1205 06:15:29.318346 42237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1205 06:15:29.442442 42237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1205 06:15:29.761424 42237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1205 06:15:30.125354 42237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1205 06:15:30.126072 42237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1205 06:15:30.129679 42237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1205 06:15:30.135534 42237 out.go:252] - Booting up control plane ...
I1205 06:15:30.135640 42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1205 06:15:30.136530 42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1205 06:15:30.137823 42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1205 06:15:30.163133 42237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1205 06:15:30.163236 42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1205 06:15:30.173579 42237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1205 06:15:30.173672 42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1205 06:15:30.173711 42237 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1205 06:15:30.320928 42237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1205 06:15:30.321041 42237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1205 06:19:30.322132 42237 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001210532s
I1205 06:19:30.322154 42237 kubeadm.go:319]
I1205 06:19:30.322250 42237 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1205 06:19:30.322446 42237 kubeadm.go:319] - The kubelet is not running
I1205 06:19:30.322625 42237 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1205 06:19:30.322633 42237 kubeadm.go:319]
I1205 06:19:30.322813 42237 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1205 06:19:30.323107 42237 kubeadm.go:319] - 'systemctl status kubelet'
I1205 06:19:30.323160 42237 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1205 06:19:30.323164 42237 kubeadm.go:319]
I1205 06:19:30.327550 42237 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1205 06:19:30.328086 42237 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1205 06:19:30.328227 42237 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1205 06:19:30.328498 42237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1205 06:19:30.328502 42237 kubeadm.go:319]
I1205 06:19:30.328642 42237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1205 06:19:30.328693 42237 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-101526 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001210532s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
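The cgroups v1 warning in the stderr above names the escape hatch directly: on a cgroup v1 host, kubelet v1.35+ has to be explicitly opted back in via the KubeletConfiguration. A minimal sketch of that opt-in, assuming the lower-camel-cased YAML spelling failCgroupV1 of the 'FailCgroupV1' option the warning cites (whether or not cgroups are the root cause of this particular kubelet failure):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false

The other half of the warning, explicitly skipping the validation, is already satisfied here: SystemVerification is on the --ignore-preflight-errors list passed to kubeadm init.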
I1205 06:19:30.328777 42237 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1205 06:19:30.753784 42237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1205 06:19:30.766951 42237 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1205 06:19:30.767004 42237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1205 06:19:30.774736 42237 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1205 06:19:30.774745 42237 kubeadm.go:158] found existing configuration files:
I1205 06:19:30.774794 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1205 06:19:30.782488 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1205 06:19:30.782552 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1205 06:19:30.790443 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1205 06:19:30.798490 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1205 06:19:30.798547 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1205 06:19:30.806344 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1205 06:19:30.814414 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1205 06:19:30.814478 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1205 06:19:30.822387 42237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1205 06:19:30.830621 42237 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1205 06:19:30.830675 42237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1205 06:19:30.838428 42237 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1205 06:19:30.945972 42237 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1205 06:19:30.946391 42237 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1205 06:19:31.016022 42237 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1205 06:23:32.632726 42237 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1205 06:23:32.632745 42237 kubeadm.go:319]
I1205 06:23:32.632827 42237 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1205 06:23:32.636565 42237 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1205 06:23:32.636616 42237 kubeadm.go:319] [preflight] Running pre-flight checks
I1205 06:23:32.636706 42237 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1205 06:23:32.636760 42237 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1205 06:23:32.636795 42237 kubeadm.go:319] OS: Linux
I1205 06:23:32.636839 42237 kubeadm.go:319] CGROUPS_CPU: enabled
I1205 06:23:32.636887 42237 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1205 06:23:32.636933 42237 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1205 06:23:32.636980 42237 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1205 06:23:32.637079 42237 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1205 06:23:32.637127 42237 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1205 06:23:32.637195 42237 kubeadm.go:319] CGROUPS_PIDS: enabled
I1205 06:23:32.637242 42237 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1205 06:23:32.637287 42237 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1205 06:23:32.637359 42237 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1205 06:23:32.637452 42237 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1205 06:23:32.637541 42237 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1205 06:23:32.637603 42237 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1205 06:23:32.639016 42237 out.go:252] - Generating certificates and keys ...
I1205 06:23:32.639097 42237 kubeadm.go:319] [certs] Using existing ca certificate authority
I1205 06:23:32.639155 42237 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1205 06:23:32.639226 42237 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1205 06:23:32.639282 42237 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1205 06:23:32.639346 42237 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1205 06:23:32.639396 42237 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1205 06:23:32.639455 42237 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1205 06:23:32.639512 42237 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1205 06:23:32.639581 42237 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1205 06:23:32.639648 42237 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1205 06:23:32.639683 42237 kubeadm.go:319] [certs] Using the existing "sa" key
I1205 06:23:32.639734 42237 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1205 06:23:32.639781 42237 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1205 06:23:32.639833 42237 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1205 06:23:32.639882 42237 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1205 06:23:32.639942 42237 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1205 06:23:32.639992 42237 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1205 06:23:32.640072 42237 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1205 06:23:32.640134 42237 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1205 06:23:32.641540 42237 out.go:252] - Booting up control plane ...
I1205 06:23:32.641651 42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1205 06:23:32.641738 42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1205 06:23:32.641809 42237 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1205 06:23:32.641926 42237 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1205 06:23:32.642025 42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1205 06:23:32.642137 42237 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1205 06:23:32.642226 42237 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1205 06:23:32.642267 42237 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1205 06:23:32.642407 42237 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1205 06:23:32.642518 42237 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1205 06:23:32.642587 42237 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001123814s
I1205 06:23:32.642590 42237 kubeadm.go:319]
I1205 06:23:32.642649 42237 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1205 06:23:32.642682 42237 kubeadm.go:319] - The kubelet is not running
I1205 06:23:32.642793 42237 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1205 06:23:32.642796 42237 kubeadm.go:319]
I1205 06:23:32.642907 42237 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1205 06:23:32.642941 42237 kubeadm.go:319] - 'systemctl status kubelet'
I1205 06:23:32.642973 42237 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1205 06:23:32.643027 42237 kubeadm.go:403] duration metric: took 8m6.130999032s to StartCluster
I1205 06:23:32.643055 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 06:23:32.643116 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 06:23:32.643295 42237 kubeadm.go:319]
I1205 06:23:32.669477 42237 cri.go:89] found id: ""
I1205 06:23:32.669491 42237 logs.go:282] 0 containers: []
W1205 06:23:32.669498 42237 logs.go:284] No container was found matching "kube-apiserver"
I1205 06:23:32.669503 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 06:23:32.669562 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 06:23:32.695194 42237 cri.go:89] found id: ""
I1205 06:23:32.695208 42237 logs.go:282] 0 containers: []
W1205 06:23:32.695215 42237 logs.go:284] No container was found matching "etcd"
I1205 06:23:32.695220 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 06:23:32.695276 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 06:23:32.719538 42237 cri.go:89] found id: ""
I1205 06:23:32.719556 42237 logs.go:282] 0 containers: []
W1205 06:23:32.719563 42237 logs.go:284] No container was found matching "coredns"
I1205 06:23:32.719569 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 06:23:32.719622 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 06:23:32.748887 42237 cri.go:89] found id: ""
I1205 06:23:32.748900 42237 logs.go:282] 0 containers: []
W1205 06:23:32.748907 42237 logs.go:284] No container was found matching "kube-scheduler"
I1205 06:23:32.748913 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 06:23:32.748972 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 06:23:32.777759 42237 cri.go:89] found id: ""
I1205 06:23:32.777772 42237 logs.go:282] 0 containers: []
W1205 06:23:32.777779 42237 logs.go:284] No container was found matching "kube-proxy"
I1205 06:23:32.777784 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 06:23:32.777841 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 06:23:32.802728 42237 cri.go:89] found id: ""
I1205 06:23:32.802742 42237 logs.go:282] 0 containers: []
W1205 06:23:32.802749 42237 logs.go:284] No container was found matching "kube-controller-manager"
I1205 06:23:32.802762 42237 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 06:23:32.802818 42237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 06:23:32.827595 42237 cri.go:89] found id: ""
I1205 06:23:32.827609 42237 logs.go:282] 0 containers: []
W1205 06:23:32.827616 42237 logs.go:284] No container was found matching "kindnet"
I1205 06:23:32.827624 42237 logs.go:123] Gathering logs for container status ...
I1205 06:23:32.827635 42237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 06:23:32.855529 42237 logs.go:123] Gathering logs for kubelet ...
I1205 06:23:32.855546 42237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1205 06:23:32.913895 42237 logs.go:123] Gathering logs for dmesg ...
I1205 06:23:32.913915 42237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 06:23:32.924927 42237 logs.go:123] Gathering logs for describe nodes ...
I1205 06:23:32.924943 42237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1205 06:23:32.990913 42237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1205 06:23:32.983297 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.983920 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.985604 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.986098 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.987539 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1205 06:23:32.983297 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.983920 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.985604 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.986098 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:32.987539 5375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1205 06:23:32.990923 42237 logs.go:123] Gathering logs for containerd ...
I1205 06:23:32.990933 42237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
W1205 06:23:33.033093 42237 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001123814s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1205 06:23:33.033137 42237 out.go:285] *
W1205 06:23:33.033222 42237 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (identical to the kubeadm init output above)
W1205 06:23:33.033290 42237 out.go:285] *
W1205 06:23:33.035445 42237 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 06:23:33.038752 42237 out.go:203]
W1205 06:23:33.040339 42237 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (identical to the kubeadm init output above)
W1205 06:23:33.040381 42237 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1205 06:23:33.040401 42237 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1205 06:23:33.041722 42237 out.go:203]
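The failure is the same in each report above: kubeadm writes the static-pod manifests and starts the kubelet, but the kubelet exits during configuration validation because the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host by default. A minimal recovery sketch for a host that must stay on cgroup v1; the --extra-config value is taken verbatim from minikube's suggestion above, and per the SystemVerification warning the FailCgroupV1 kubelet option would additionally have to be set to false with that validation explicitly skipped (how to route that through minikube is not confirmed by this log):

  # Recreate the profile with the kubelet override minikube itself suggests (sketch).
  out/minikube-linux-arm64 delete -p functional-101526
  out/minikube-linux-arm64 start -p functional-101526 \
    --kubernetes-version=v1.35.0-beta.0 \
    --extra-config=kubelet.cgroup-driver=systemd   # other original start flags elided

The durable fix is booting the host into cgroup v2 (on systemd hosts, the kernel parameter systemd.unified_cgroup_hierarchy=1), which sidesteps the deprecated-cgroup-v1 validation entirely per the KEP linked in the warning.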
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
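An empty CRI table confirms the kubelet never got far enough to hand the control-plane manifests to containerd: kubeadm reported writing them to /etc/kubernetes/manifests, yet no container was ever created. A quick cross-check from outside the test harness, using standard minikube ssh and crictl invocations (a sketch; it assumes the profile container is still up after the failed start):

  # List every container the CRI runtime knows about inside the node.
  out/minikube-linux-arm64 ssh -p functional-101526 "sudo crictl ps -a"
  # Manifests on disk plus an empty container list points at the kubelet, not containerd.
  out/minikube-linux-arm64 ssh -p functional-101526 "ls /etc/kubernetes/manifests"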
==> containerd <==
Dec 05 06:15:18 functional-101526 containerd[765]: time="2025-12-05T06:15:18.466794282Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.380868960Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.382956520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.391111155Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:19 functional-101526 containerd[765]: time="2025-12-05T06:15:19.391945994Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.372606956Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.374866858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.390438399Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:20 functional-101526 containerd[765]: time="2025-12-05T06:15:20.391926388Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.772952290Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.775274305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.789261652Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:21 functional-101526 containerd[765]: time="2025-12-05T06:15:21.801952108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.675553467Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.677773820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.692521911Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:22 functional-101526 containerd[765]: time="2025-12-05T06:15:22.693377082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.739345342Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.741500504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.764578635Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:23 functional-101526 containerd[765]: time="2025-12-05T06:15:23.765671888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.096994562Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.099843811Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.107868214Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 05 06:15:24 functional-101526 containerd[765]: time="2025-12-05T06:15:24.108162846Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1205 06:23:33.992733 5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:33.993428 5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:33.994993 5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:33.995516 5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1205 06:23:33.997033 5482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 5 05:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014847] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.514630] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.032112] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.698033] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.780023] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
06:23:34 up 1:06, 0 user, load average: 0.08, 0.49, 0.68
Linux functional-101526 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 05 06:23:30 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 05 06:23:31 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:31 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:31 functional-101526 kubelet[5288]: E1205 06:23:31.140595 5288 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 05 06:23:31 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:31 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:31 functional-101526 kubelet[5293]: E1205 06:23:31.885579 5293 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 06:23:31 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 05 06:23:32 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 05 06:23:32 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:32 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:32 functional-101526 kubelet[5299]: E1205 06:23:32.650700 5299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 05 06:23:32 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 06:23:32 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 05 06:23:33 functional-101526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 05 06:23:33 functional-101526 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:33 functional-101526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 05 06:23:33 functional-101526 kubelet[5394]: E1205 06:23:33.369594 5394 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 05 06:23:33 functional-101526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 05 06:23:33 functional-101526 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
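The journal above shows the kubelet crash-looping (restart counters 318 through 321) on the same validation error each time: it is configured not to run on cgroup v1. Confirming which cgroup hierarchy the node container actually sees takes one command; stat's %T format prints the type of the filesystem mounted at /sys/fs/cgroup (a sketch, again assuming the profile container is still running):

  # cgroup2fs means cgroup v2; tmpfs means cgroup v1, the failing case in this log.
  out/minikube-linux-arm64 ssh -p functional-101526 "stat -fc %T /sys/fs/cgroup"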
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-101526 -n functional-101526: exit status 6 (358.550497ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1205 06:23:34.497796 48449 status.go:458] kubeconfig endpoint: get endpoint: "functional-101526" does not appear in /home/jenkins/minikube-integration/21997-2385/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-101526" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (506.47s)