=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1212 19:42:22.896919 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:42:50.610272 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.906462 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.912848 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.924321 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.945726 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:51.987203 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:52.068736 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:52.230235 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:52.551942 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:53.194114 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:54.475719 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:44:57.037136 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:02.159132 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:12.401034 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:45:32.882384 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:46:13.844351 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.896844 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/addons-593103/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:35.768013 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-008271/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.444277766s)
-- stdout --
* [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22112
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-384006" primary control-plane node in "functional-384006" cluster
* Pulling base image v0.0.48-1765505794-22112 ...
* Found network options:
- HTTP_PROXY=localhost:46339
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:46339 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000209188s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001214076s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001214076s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
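Both kubeadm attempts above stall at the same point: the kubelet never answers http://127.0.0.1:10248/healthz, so wait-control-plane times out after 4m. A minimal triage sketch following the hints kubeadm itself prints (profile name and healthz port taken from the log; the tail length is illustrative):

    # Check kubelet state inside the minikube node, per kubeadm's own hints
    minikube ssh -p functional-384006 -- sudo systemctl status kubelet
    minikube ssh -p functional-384006 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
    # Probe the same healthz endpoint kubeadm polls
    minikube ssh -p functional-384006 -- curl -sS http://127.0.0.1:10248/healthz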
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
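Given the suggestion printed at the end of stderr, a hedged retry would add the kubelet cgroup-driver override. Everything beyond the suggested --extra-config flag is an assumption: --force-systemd is the flag counterpart of the MINIKUBE_FORCE_SYSTEMD variable shown in the environment dump, and may matter here because docker info reports CgroupDriver:cgroupfs while the kubeadm warning says kubelet v1.35 is dropping cgroup v1 support:

    # Sketch of a retry per minikube's suggestion (flags beyond the suggestion are assumptions)
    out/minikube-linux-arm64 delete -p functional-384006
    out/minikube-linux-arm64 start -p functional-384006 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --force-systemd=true --extra-config=kubelet.cgroup-driver=systemd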
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
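The host proxy vars are empty at post-mortem time, but the start attempt itself saw HTTP_PROXY=localhost:46339 and warned that NO_PROXY does not include the minikube IP. A sketch of the environment fix described by the proxy handbook linked in stdout, assuming the values from this log:

    # Assumed fix per the NO_PROXY warning: exclude the node IP from the proxy
    export HTTP_PROXY=localhost:46339
    export NO_PROXY=localhost,127.0.0.1,192.168.49.2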
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-384006
helpers_test.go:244: (dbg) docker inspect functional-384006:
-- stdout --
[
{
"Id": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
"Created": "2025-12-12T19:40:49.413785329Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 43086,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-12T19:40:49.485581335Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
"ResolvConfPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hostname",
"HostsPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/hosts",
"LogPath": "/var/lib/docker/containers/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17/b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17-json.log",
"Name": "/functional-384006",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-384006:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-384006",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "b1a98cbc46983da503d17ae9e5cfce64cc73f7c5d413eaf013b72b42f05f9a17",
"LowerDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296-init/diff:/var/lib/docker/overlay2/e045d4bf347c64f3cbf42a97f0cb5729ed5699bda73ca5751717f555f7c01df1/diff",
"MergedDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/merged",
"UpperDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/diff",
"WorkDir": "/var/lib/docker/overlay2/917d585fbc7b2a2e07b0fa5b92134ce8bc1ce6f4ce3cfbbbb8ea01309db08296/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "functional-384006",
"Source": "/var/lib/docker/volumes/functional-384006/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "functional-384006",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-384006",
"name.minikube.sigs.k8s.io": "functional-384006",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "36cb954f7d4f6bf90d415ba6b309740af43913afba20f6d7d93ec3c7d90d4de5",
"SandboxKey": "/var/run/docker/netns/36cb954f7d4f",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-384006": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "72:63:42:b7:50:34",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "ef3790c143c0333ab10341d6a40177cef53914dddf926d048a811221f7b4d25e",
"EndpointID": "d9f77e46696253f9c3ce8a0a36703d7a03738ae348c39276dbe99fc3079fb5ee",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-384006",
"b1a98cbc4698"
]
}
}
}
}
]
-- /stdout --
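The Ports map in the inspect output above is the same data minikube reads to locate the node's SSH endpoint; the Go template it uses appears verbatim later in the start log. To pull the mapped SSH port by hand:

    # Extract the host port bound to 22/tcp from the container's network settings
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-384006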
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-384006 -n functional-384006: exit status 6 (325.90184ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1212 19:49:05.040907 48147 status.go:458] kubeconfig endpoint: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
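The exit-6 status comes from the kubeconfig no longer carrying an endpoint for functional-384006; minikube's own hint in stdout is update-context. A sketch against the same binary and profile:

    # Repoint the kubectl context at this profile's current endpoint
    out/minikube-linux-arm64 update-context -p functional-384006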
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-384006 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-008271 ssh sudo cat /etc/ssl/certs/41202.pem │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ ssh │ functional-008271 ssh sudo cat /usr/share/ca-certificates/41202.pem │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ ssh │ functional-008271 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image load --daemon kicbase/echo-server:functional-008271 --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ update-context │ functional-008271 update-context --alsologtostderr -v=2 │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ update-context │ functional-008271 update-context --alsologtostderr -v=2 │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image save kicbase/echo-server:functional-008271 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ update-context │ functional-008271 update-context --alsologtostderr -v=2 │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image rm kicbase/echo-server:functional-008271 --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image save --daemon kicbase/echo-server:functional-008271 --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls --format short --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls --format yaml --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls --format json --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls --format table --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ ssh │ functional-008271 ssh pgrep buildkitd │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ │
│ image │ functional-008271 image build -t localhost/my-image:functional-008271 testdata/build --alsologtostderr │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ image │ functional-008271 image ls │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ delete │ -p functional-008271 │ functional-008271 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ 12 Dec 25 19:40 UTC │
│ start │ -p functional-384006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-384006 │ jenkins │ v1.37.0 │ 12 Dec 25 19:40 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/12 19:40:44
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1212 19:40:44.310161 42701 out.go:360] Setting OutFile to fd 1 ...
I1212 19:40:44.310273 42701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:44.310277 42701 out.go:374] Setting ErrFile to fd 2...
I1212 19:40:44.310281 42701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:40:44.310628 42701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-2315/.minikube/bin
I1212 19:40:44.311115 42701 out.go:368] Setting JSON to false
I1212 19:40:44.312242 42701 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1394,"bootTime":1765567051,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1212 19:40:44.312304 42701 start.go:143] virtualization:
I1212 19:40:44.316422 42701 out.go:179] * [functional-384006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1212 19:40:44.321012 42701 out.go:179] - MINIKUBE_LOCATION=22112
I1212 19:40:44.321108 42701 notify.go:221] Checking for updates...
I1212 19:40:44.327985 42701 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 19:40:44.331143 42701 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22112-2315/kubeconfig
I1212 19:40:44.334255 42701 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-2315/.minikube
I1212 19:40:44.337345 42701 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1212 19:40:44.340453 42701 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1212 19:40:44.343550 42701 driver.go:422] Setting default libvirt URI to qemu:///system
I1212 19:40:44.378127 42701 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1212 19:40:44.378238 42701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 19:40:44.442593 42701 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 19:40:44.433207397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 19:40:44.442683 42701 docker.go:319] overlay module found
I1212 19:40:44.445954 42701 out.go:179] * Using the docker driver based on user configuration
I1212 19:40:44.448927 42701 start.go:309] selected driver: docker
I1212 19:40:44.448934 42701 start.go:927] validating driver "docker" against <nil>
I1212 19:40:44.448946 42701 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 19:40:44.449638 42701 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 19:40:44.504328 42701 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 19:40:44.494804233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 19:40:44.504490 42701 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1212 19:40:44.504702 42701 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 19:40:44.507653 42701 out.go:179] * Using Docker driver with root privileges
I1212 19:40:44.510526 42701 cni.go:84] Creating CNI manager for ""
I1212 19:40:44.510593 42701 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1212 19:40:44.510599 42701 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1212 19:40:44.510668 42701 start.go:353] cluster config:
{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 19:40:44.515768 42701 out.go:179] * Starting "functional-384006" primary control-plane node in "functional-384006" cluster
I1212 19:40:44.518614 42701 cache.go:134] Beginning downloading kic base image for docker with containerd
I1212 19:40:44.521608 42701 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
I1212 19:40:44.524484 42701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 19:40:44.524520 42701 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1212 19:40:44.524528 42701 cache.go:65] Caching tarball of preloaded images
I1212 19:40:44.524562 42701 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
I1212 19:40:44.524615 42701 preload.go:238] Found /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1212 19:40:44.524624 42701 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1212 19:40:44.524954 42701 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
I1212 19:40:44.524974 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json: {Name:mkc67cd233583856f1f5fc489517f02e18634395 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:44.545471 42701 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
I1212 19:40:44.545481 42701 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
I1212 19:40:44.545499 42701 cache.go:243] Successfully downloaded all kic artifacts
I1212 19:40:44.545533 42701 start.go:360] acquireMachinesLock for functional-384006: {Name:mk3334c8fedf7efc32fb4628474f2cba3c1d9181 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 19:40:44.545642 42701 start.go:364] duration metric: took 96.063µs to acquireMachinesLock for "functional-384006"
I1212 19:40:44.545665 42701 start.go:93] Provisioning new machine with config: &{Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1212 19:40:44.545726 42701 start.go:125] createHost starting for "" (driver="docker")
I1212 19:40:44.549281 42701 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1212 19:40:44.549548 42701 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:46339 to docker env.
I1212 19:40:44.549576 42701 start.go:159] libmachine.API.Create for "functional-384006" (driver="docker")
I1212 19:40:44.549596 42701 client.go:173] LocalClient.Create starting
I1212 19:40:44.549656 42701 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem
I1212 19:40:44.549687 42701 main.go:143] libmachine: Decoding PEM data...
I1212 19:40:44.549700 42701 main.go:143] libmachine: Parsing certificate...
I1212 19:40:44.549750 42701 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem
I1212 19:40:44.549764 42701 main.go:143] libmachine: Decoding PEM data...
I1212 19:40:44.549774 42701 main.go:143] libmachine: Parsing certificate...
I1212 19:40:44.550119 42701 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 19:40:44.565682 42701 cli_runner.go:211] docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 19:40:44.565762 42701 network_create.go:284] running [docker network inspect functional-384006] to gather additional debugging logs...
I1212 19:40:44.565776 42701 cli_runner.go:164] Run: docker network inspect functional-384006
W1212 19:40:44.579634 42701 cli_runner.go:211] docker network inspect functional-384006 returned with exit code 1
I1212 19:40:44.579661 42701 network_create.go:287] error running [docker network inspect functional-384006]: docker network inspect functional-384006: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-384006 not found
I1212 19:40:44.579672 42701 network_create.go:289] output of [docker network inspect functional-384006]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-384006 not found
** /stderr **
I1212 19:40:44.579758 42701 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 19:40:44.595724 42701 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193d020}
I1212 19:40:44.595753 42701 network_create.go:124] attempt to create docker network functional-384006 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1212 19:40:44.595810 42701 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-384006 functional-384006
I1212 19:40:44.648762 42701 network_create.go:108] docker network functional-384006 192.168.49.0/24 created
I1212 19:40:44.648784 42701 kic.go:121] calculated static IP "192.168.49.2" for the "functional-384006" container
I1212 19:40:44.648868 42701 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1212 19:40:44.667189 42701 cli_runner.go:164] Run: docker volume create functional-384006 --label name.minikube.sigs.k8s.io=functional-384006 --label created_by.minikube.sigs.k8s.io=true
I1212 19:40:44.686404 42701 oci.go:103] Successfully created a docker volume functional-384006
I1212 19:40:44.686484 42701 cli_runner.go:164] Run: docker run --rm --name functional-384006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-384006 --entrypoint /usr/bin/test -v functional-384006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
I1212 19:40:45.308553 42701 oci.go:107] Successfully prepared a docker volume functional-384006
I1212 19:40:45.308616 42701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 19:40:45.308627 42701 kic.go:194] Starting extracting preloaded images to volume ...
I1212 19:40:45.308706 42701 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-384006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
I1212 19:40:49.339364 42701 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-384006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.030626381s)
I1212 19:40:49.339385 42701 kic.go:203] duration metric: took 4.03075446s to extract preloaded images to volume ...
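
The "duration metric" lines above come from timing the external command against a wall clock. A stdlib sketch of the pattern (the command here is a stand-in, not the actual extraction invocation):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Stand-in for the docker/tar preload extraction logged above.
	if err := exec.Command("docker", "version").Run(); err != nil {
		log.Fatal(err)
	}
	log.Printf("duration metric: took %s to run command", time.Since(start))
}
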
W1212 19:40:49.339535 42701 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1212 19:40:49.339629 42701 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1212 19:40:49.399382 42701 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-384006 --name functional-384006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-384006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-384006 --network functional-384006 --ip 192.168.49.2 --volume functional-384006:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
I1212 19:40:49.691529 42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Running}}
I1212 19:40:49.713597 42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 19:40:49.741772 42701 cli_runner.go:164] Run: docker exec functional-384006 stat /var/lib/dpkg/alternatives/iptables
I1212 19:40:49.785019 42701 oci.go:144] the created container "functional-384006" has a running status.
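
The running-status check above boils down to a single docker inspect call. A hedged Go equivalent (the helper name is made up for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerRunning asks the docker daemon whether the named container is up.
func containerRunning(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", "{{.State.Running}}", name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	fmt.Println(containerRunning("functional-384006"))
}
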
I1212 19:40:49.785038 42701 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa...
I1212 19:40:50.030738 42701 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1212 19:40:50.071006 42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 19:40:50.092909 42701 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1212 19:40:50.092920 42701 kic_runner.go:114] Args: [docker exec --privileged functional-384006 chown docker:docker /home/docker/.ssh/authorized_keys]
I1212 19:40:50.145932 42701 cli_runner.go:164] Run: docker container inspect functional-384006 --format={{.State.Status}}
I1212 19:40:50.179366 42701 machine.go:94] provisionDockerMachine start ...
I1212 19:40:50.179482 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:50.201552 42701 main.go:143] libmachine: Using SSH client type: native
I1212 19:40:50.201876 42701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1212 19:40:50.201884 42701 main.go:143] libmachine: About to run SSH command:
hostname
I1212 19:40:50.202609 42701 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1212 19:40:53.359592 42701 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
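
The "Error dialing TCP: ssh: handshake failed: EOF" above is benign: sshd inside the container was still starting, and the client retried until the hostname command succeeded about three seconds later. A minimal wait-for-port loop in the same spirit (note a bare TCP dial is a weaker check than a full SSH handshake; illustrative only):

package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForPort("127.0.0.1:32788", 30*time.Second))
}
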
I1212 19:40:53.359605 42701 ubuntu.go:182] provisioning hostname "functional-384006"
I1212 19:40:53.359667 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:53.378349 42701 main.go:143] libmachine: Using SSH client type: native
I1212 19:40:53.378706 42701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1212 19:40:53.378719 42701 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-384006 && echo "functional-384006" | sudo tee /etc/hostname
I1212 19:40:53.541260 42701 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-384006
I1212 19:40:53.541326 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:53.561542 42701 main.go:143] libmachine: Using SSH client type: native
I1212 19:40:53.561847 42701 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1212 19:40:53.561860 42701 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-384006' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-384006/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-384006' | sudo tee -a /etc/hosts;
  fi
fi
I1212 19:40:53.712504 42701 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1212 19:40:53.712518 42701 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-2315/.minikube}
I1212 19:40:53.712537 42701 ubuntu.go:190] setting up certificates
I1212 19:40:53.712546 42701 provision.go:84] configureAuth start
I1212 19:40:53.712603 42701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
I1212 19:40:53.730838 42701 provision.go:143] copyHostCerts
I1212 19:40:53.730905 42701 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem, removing ...
I1212 19:40:53.730912 42701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem
I1212 19:40:53.730987 42701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/ca.pem (1078 bytes)
I1212 19:40:53.731074 42701 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem, removing ...
I1212 19:40:53.731077 42701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem
I1212 19:40:53.731107 42701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/cert.pem (1123 bytes)
I1212 19:40:53.731188 42701 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem, removing ...
I1212 19:40:53.731191 42701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem
I1212 19:40:53.731219 42701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-2315/.minikube/key.pem (1679 bytes)
I1212 19:40:53.731261 42701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem org=jenkins.functional-384006 san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
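
The server cert generated above carries the logged SAN list so the endpoint is valid under every name it may be reached by. A library-style crypto/x509 sketch, assuming a CA cert and key are already loaded (the function name and key size are assumptions, not minikube's code):

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// SignServerCert issues a CA-signed serving cert with the SANs logged above.
func SignServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-384006"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.168.49.2 functional-384006 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-384006", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}
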
I1212 19:40:53.985720 42701 provision.go:177] copyRemoteCerts
I1212 19:40:53.985776 42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 19:40:53.985816 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:54.002922 42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 19:40:54.115669 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1212 19:40:54.133018 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1212 19:40:54.150638 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1212 19:40:54.167630 42701 provision.go:87] duration metric: took 455.071631ms to configureAuth
I1212 19:40:54.167647 42701 ubuntu.go:206] setting minikube options for container-runtime
I1212 19:40:54.167826 42701 config.go:182] Loaded profile config "functional-384006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 19:40:54.167832 42701 machine.go:97] duration metric: took 3.98845611s to provisionDockerMachine
I1212 19:40:54.167926 42701 client.go:176] duration metric: took 9.61832436s to LocalClient.Create
I1212 19:40:54.167942 42701 start.go:167] duration metric: took 9.618369487s to libmachine.API.Create "functional-384006"
I1212 19:40:54.167948 42701 start.go:293] postStartSetup for "functional-384006" (driver="docker")
I1212 19:40:54.167957 42701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 19:40:54.168014 42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 19:40:54.168049 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:54.184492 42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 19:40:54.292188 42701 ssh_runner.go:195] Run: cat /etc/os-release
I1212 19:40:54.295686 42701 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1212 19:40:54.295703 42701 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1212 19:40:54.295713 42701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/addons for local assets ...
I1212 19:40:54.295768 42701 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-2315/.minikube/files for local assets ...
I1212 19:40:54.295875 42701 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
I1212 19:40:54.295955 42701 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
I1212 19:40:54.296004 42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
I1212 19:40:54.303563 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
I1212 19:40:54.321914 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
I1212 19:40:54.339505 42701 start.go:296] duration metric: took 171.544506ms for postStartSetup
I1212 19:40:54.339885 42701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
I1212 19:40:54.356827 42701 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/config.json ...
I1212 19:40:54.357090 42701 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1212 19:40:54.357127 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:54.373329 42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 19:40:54.476716 42701 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1212 19:40:54.482048 42701 start.go:128] duration metric: took 9.936309777s to createHost
I1212 19:40:54.482063 42701 start.go:83] releasing machines lock for "functional-384006", held for 9.936414126s
I1212 19:40:54.482141 42701 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-384006
I1212 19:40:54.502216 42701 out.go:179] * Found network options:
I1212 19:40:54.505144 42701 out.go:179] - HTTP_PROXY=localhost:46339
W1212 19:40:54.508027 42701 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1212 19:40:54.510991 42701 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1212 19:40:54.514023 42701 ssh_runner.go:195] Run: cat /version.json
I1212 19:40:54.514073 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:54.514113 42701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 19:40:54.514164 42701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-384006
I1212 19:40:54.532511 42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 19:40:54.534156 42701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22112-2315/.minikube/machines/functional-384006/id_rsa Username:docker}
I1212 19:40:54.635420 42701 ssh_runner.go:195] Run: systemctl --version
I1212 19:40:54.729544 42701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1212 19:40:54.733676 42701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 19:40:54.733738 42701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 19:40:54.759295 42701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
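
The find/mv pipeline above parks competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix, which is why the next line reports them as disabled. A stdlib-only sketch of the same idea (glob patterns are illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already parked
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}
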
I1212 19:40:54.759308 42701 start.go:496] detecting cgroup driver to use...
I1212 19:40:54.759338 42701 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1212 19:40:54.759382 42701 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1212 19:40:54.774145 42701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 19:40:54.786499 42701 docker.go:218] disabling cri-docker service (if available) ...
I1212 19:40:54.786548 42701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1212 19:40:54.803147 42701 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1212 19:40:54.820882 42701 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1212 19:40:54.938578 42701 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1212 19:40:55.057426 42701 docker.go:234] disabling docker service ...
I1212 19:40:55.057500 42701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1212 19:40:55.092468 42701 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1212 19:40:55.109954 42701 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1212 19:40:55.273280 42701 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1212 19:40:55.382582 42701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1212 19:40:55.396029 42701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 19:40:55.410031 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1212 19:40:55.418655 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 19:40:55.427275 42701 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 19:40:55.427340 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 19:40:55.436177 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 19:40:55.444614 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 19:40:55.453070 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 19:40:55.461686 42701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 19:40:55.469786 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 19:40:55.478293 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1212 19:40:55.486712 42701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1212 19:40:55.495711 42701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 19:40:55.503162 42701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 19:40:55.510493 42701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 19:40:55.632911 42701 ssh_runner.go:195] Run: sudo systemctl restart containerd
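
The run of sed edits above rewrites /etc/containerd/config.toml in place, most importantly forcing SystemdCgroup = false so containerd matches the detected "cgroupfs" driver, before the daemon-reload and restart take effect. The first rewrite, expressed as a Go sketch (illustrative, not minikube's code):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
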
I1212 19:40:55.766297 42701 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1212 19:40:55.766387 42701 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1212 19:40:55.770388 42701 start.go:564] Will wait 60s for crictl version
I1212 19:40:55.770441 42701 ssh_runner.go:195] Run: which crictl
I1212 19:40:55.774491 42701 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1212 19:40:55.799148 42701 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1212 19:40:55.799222 42701 ssh_runner.go:195] Run: containerd --version
I1212 19:40:55.818936 42701 ssh_runner.go:195] Run: containerd --version
I1212 19:40:55.846395 42701 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1212 19:40:55.849236 42701 cli_runner.go:164] Run: docker network inspect functional-384006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 19:40:55.865344 42701 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1212 19:40:55.869370 42701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
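
The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any stale line, then append the current one. A hedged Go equivalent (illustrative; the real step runs over SSH inside the node):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry, as grep -v does above
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
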
I1212 19:40:55.878997 42701 kubeadm.go:884] updating cluster {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1212 19:40:55.879103 42701 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 19:40:55.879161 42701 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 19:40:55.903551 42701 containerd.go:627] all images are preloaded for containerd runtime.
I1212 19:40:55.903561 42701 containerd.go:534] Images already preloaded, skipping extraction
I1212 19:40:55.903621 42701 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 19:40:55.929106 42701 containerd.go:627] all images are preloaded for containerd runtime.
I1212 19:40:55.929117 42701 cache_images.go:86] Images are preloaded, skipping loading
I1212 19:40:55.929124 42701 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1212 19:40:55.929254 42701 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-384006 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1212 19:40:55.929318 42701 ssh_runner.go:195] Run: sudo crictl info
I1212 19:40:55.954490 42701 cni.go:84] Creating CNI manager for ""
I1212 19:40:55.954500 42701 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1212 19:40:55.954521 42701 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1212 19:40:55.954541 42701 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-384006 NodeName:functional-384006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1212 19:40:55.954644 42701 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-384006"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
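
The rendered kubeadm.yaml above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only sketch that splits the stream and reports each document's kind (illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
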
I1212 19:40:55.954707 42701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1212 19:40:55.962641 42701 binaries.go:51] Found k8s binaries, skipping transfer
I1212 19:40:55.962697 42701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 19:40:55.970323 42701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1212 19:40:55.982777 42701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1212 19:40:55.995589 42701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1212 19:40:56.011018 42701 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1212 19:40:56.015555 42701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 19:40:56.025946 42701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 19:40:56.153914 42701 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 19:40:56.173527 42701 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006 for IP: 192.168.49.2
I1212 19:40:56.173536 42701 certs.go:195] generating shared ca certs ...
I1212 19:40:56.173570 42701 certs.go:227] acquiring lock for ca certs: {Name:mk39256c1929fe0803d745b94bd58afc348a7e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:56.173724 42701 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key
I1212 19:40:56.173769 42701 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key
I1212 19:40:56.173775 42701 certs.go:257] generating profile certs ...
I1212 19:40:56.173828 42701 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key
I1212 19:40:56.173844 42701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt with IP's: []
I1212 19:40:56.532866 42701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt ...
I1212 19:40:56.532887 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.crt: {Name:mkfc9e34b0f1c99d91593dc19a049aba37bdd405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:56.533085 42701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key ...
I1212 19:40:56.533092 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/client.key: {Name:mkb01f552e965f3b10de445d0acb1cf236c8c366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:56.533179 42701 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b
I1212 19:40:56.533191 42701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1212 19:40:56.761942 42701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b ...
I1212 19:40:56.761960 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b: {Name:mk6291efbc5837d9af5a2a86e1048ea5beaa00e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:56.762146 42701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b ...
I1212 19:40:56.762154 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b: {Name:mkc6b03f5b97f234184e0c9a10a5beb4e3f40854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:56.762248 42701 certs.go:382] copying /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt.6e756d1b -> /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt
I1212 19:40:56.762322 42701 certs.go:386] copying /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key.6e756d1b -> /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key
I1212 19:40:56.762375 42701 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key
I1212 19:40:56.762387 42701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt with IP's: []
I1212 19:40:57.111819 42701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt ...
I1212 19:40:57.111845 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt: {Name:mk61b30049c330c3b79396114753a2e33f49d208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:57.112053 42701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key ...
I1212 19:40:57.112062 42701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key: {Name:mk60a8c5379d01c0c15152b95a875b9b71f78bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 19:40:57.112281 42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem (1338 bytes)
W1212 19:40:57.112330 42701 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
I1212 19:40:57.112339 42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca-key.pem (1675 bytes)
I1212 19:40:57.112379 42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/ca.pem (1078 bytes)
I1212 19:40:57.112406 42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/cert.pem (1123 bytes)
I1212 19:40:57.112429 42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/certs/key.pem (1679 bytes)
I1212 19:40:57.112489 42701 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
I1212 19:40:57.113091 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 19:40:57.131070 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1212 19:40:57.150569 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 19:40:57.168806 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1212 19:40:57.186548 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1212 19:40:57.204581 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1212 19:40:57.223308 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 19:40:57.241998 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/profiles/functional-384006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1212 19:40:57.259628 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
I1212 19:40:57.277661 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 19:40:57.295169 42701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
I1212 19:40:57.312863 42701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1212 19:40:57.325510 42701 ssh_runner.go:195] Run: openssl version
I1212 19:40:57.332523 42701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1212 19:40:57.340004 42701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1212 19:40:57.347406 42701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 19:40:57.351185 42701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
I1212 19:40:57.351240 42701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 19:40:57.392065 42701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1212 19:40:57.399583 42701 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1212 19:40:57.406996 42701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
I1212 19:40:57.414506 42701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
I1212 19:40:57.422335 42701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
I1212 19:40:57.426019 42701 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:40 /usr/share/ca-certificates/4120.pem
I1212 19:40:57.426071 42701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
I1212 19:40:57.467038 42701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1212 19:40:57.474748 42701 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
I1212 19:40:57.482206 42701 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
I1212 19:40:57.489668 42701 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
I1212 19:40:57.497440 42701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
I1212 19:40:57.501261 42701 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:40 /usr/share/ca-certificates/41202.pem
I1212 19:40:57.501316 42701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
I1212 19:40:57.542356 42701 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1212 19:40:57.549976 42701 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
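
Each openssl/ln pair above links a CA bundle under its OpenSSL subject-hash name (for example b5213941.0) so TLS libraries scanning /etc/ssl/certs can locate it. The same step as a Go sketch shelling out to openssl (illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash symlinks a PEM file under its OpenSSL subject-hash name.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, as ln -fs does
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}
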
I1212 19:40:57.557405 42701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1212 19:40:57.560910 42701 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1212 19:40:57.560952 42701 kubeadm.go:401] StartCluster: {Name:functional-384006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-384006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 19:40:57.561023 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1212 19:40:57.561086 42701 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1212 19:40:57.590406 42701 cri.go:89] found id: ""
I1212 19:40:57.590467 42701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 19:40:57.598052 42701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1212 19:40:57.605581 42701 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1212 19:40:57.605640 42701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 19:40:57.613371 42701 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 19:40:57.613380 42701 kubeadm.go:158] found existing configuration files:
I1212 19:40:57.613445 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1212 19:40:57.621043 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1212 19:40:57.621095 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1212 19:40:57.628172 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1212 19:40:57.635356 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1212 19:40:57.635410 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1212 19:40:57.642343 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1212 19:40:57.649795 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1212 19:40:57.649850 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1212 19:40:57.657122 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1212 19:40:57.664772 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1212 19:40:57.664828 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1212 19:40:57.672341 42701 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1212 19:40:57.733985 42701 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1212 19:40:57.734474 42701 kubeadm.go:319] [preflight] Running pre-flight checks
I1212 19:40:57.814083 42701 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1212 19:40:57.814149 42701 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1212 19:40:57.814183 42701 kubeadm.go:319] OS: Linux
I1212 19:40:57.814227 42701 kubeadm.go:319] CGROUPS_CPU: enabled
I1212 19:40:57.814274 42701 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1212 19:40:57.814321 42701 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1212 19:40:57.814372 42701 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1212 19:40:57.814419 42701 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1212 19:40:57.814469 42701 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1212 19:40:57.814514 42701 kubeadm.go:319] CGROUPS_PIDS: enabled
I1212 19:40:57.814577 42701 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1212 19:40:57.814622 42701 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1212 19:40:57.885000 42701 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1212 19:40:57.885127 42701 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1212 19:40:57.885238 42701 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1212 19:40:57.896539 42701 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 19:40:57.903138 42701 out.go:252] - Generating certificates and keys ...
I1212 19:40:57.903228 42701 kubeadm.go:319] [certs] Using existing ca certificate authority
I1212 19:40:57.903298 42701 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1212 19:40:58.467454 42701 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1212 19:40:59.248213 42701 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1212 19:40:59.476271 42701 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1212 19:40:59.725706 42701 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1212 19:41:00.097009 42701 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1212 19:41:00.097142 42701 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1212 19:41:00.200942 42701 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1212 19:41:00.201127 42701 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1212 19:41:00.321004 42701 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1212 19:41:00.473525 42701 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1212 19:41:00.654531 42701 kubeadm.go:319] [certs] Generating "sa" key and public key
I1212 19:41:00.654906 42701 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 19:41:00.789773 42701 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 19:41:00.880064 42701 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1212 19:41:01.176824 42701 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 19:41:01.485183 42701 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 19:41:01.536937 42701 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 19:41:01.537645 42701 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1212 19:41:01.540461 42701 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 19:41:01.544024 42701 out.go:252] - Booting up control plane ...
I1212 19:41:01.544119 42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 19:41:01.544200 42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 19:41:01.544271 42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 19:41:01.559376 42701 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 19:41:01.559477 42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1212 19:41:01.567380 42701 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1212 19:41:01.567830 42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 19:41:01.568153 42701 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1212 19:41:01.692374 42701 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1212 19:41:01.692516 42701 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1212 19:45:01.691912 42701 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000209188s
I1212 19:45:01.691940 42701 kubeadm.go:319]
I1212 19:45:01.692010 42701 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1212 19:45:01.692051 42701 kubeadm.go:319] - The kubelet is not running
I1212 19:45:01.692686 42701 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1212 19:45:01.692702 42701 kubeadm.go:319]
I1212 19:45:01.693002 42701 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1212 19:45:01.693315 42701 kubeadm.go:319] - 'systemctl status kubelet'
I1212 19:45:01.693381 42701 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1212 19:45:01.693398 42701 kubeadm.go:319]
I1212 19:45:01.701221 42701 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1212 19:45:01.701624 42701 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1212 19:45:01.701726 42701 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1212 19:45:01.701973 42701 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1212 19:45:01.701978 42701 kubeadm.go:319]
I1212 19:45:01.702041 42701 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1212 19:45:01.702160 42701 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-384006 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000209188s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
I1212 19:45:01.702497 42701 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1212 19:45:02.120065 42701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 19:45:02.138262 42701 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1212 19:45:02.138316 42701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 19:45:02.146370 42701 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 19:45:02.146383 42701 kubeadm.go:158] found existing configuration files:
I1212 19:45:02.146436 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1212 19:45:02.157210 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1212 19:45:02.157265 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1212 19:45:02.165330 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1212 19:45:02.173856 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1212 19:45:02.173917 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1212 19:45:02.183915 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1212 19:45:02.191989 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1212 19:45:02.192049 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1212 19:45:02.200029 42701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1212 19:45:02.210376 42701 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1212 19:45:02.210433 42701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
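[editor note] The grep-then-rm sequence above is minikube's stale kubeconfig cleanup; a condensed sketch of the same logic, with the endpoint and file list copied from the log:

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # remove any kubeconfig that does not reference the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done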
I1212 19:45:02.218868 42701 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1212 19:45:02.351777 42701 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1212 19:45:02.352255 42701 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1212 19:45:02.428320 42701 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1212 19:49:04.286204 42701 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1212 19:49:04.286222 42701 kubeadm.go:319]
I1212 19:49:04.286293 42701 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1212 19:49:04.290915 42701 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1212 19:49:04.290970 42701 kubeadm.go:319] [preflight] Running pre-flight checks
I1212 19:49:04.291087 42701 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1212 19:49:04.291149 42701 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1212 19:49:04.291183 42701 kubeadm.go:319] OS: Linux
I1212 19:49:04.291246 42701 kubeadm.go:319] CGROUPS_CPU: enabled
I1212 19:49:04.291297 42701 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1212 19:49:04.291344 42701 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1212 19:49:04.291391 42701 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1212 19:49:04.291437 42701 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1212 19:49:04.291486 42701 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1212 19:49:04.291530 42701 kubeadm.go:319] CGROUPS_PIDS: enabled
I1212 19:49:04.291577 42701 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1212 19:49:04.291624 42701 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1212 19:49:04.291695 42701 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1212 19:49:04.291789 42701 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1212 19:49:04.291891 42701 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1212 19:49:04.291953 42701 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 19:49:04.294973 42701 out.go:252] - Generating certificates and keys ...
I1212 19:49:04.295045 42701 kubeadm.go:319] [certs] Using existing ca certificate authority
I1212 19:49:04.295113 42701 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1212 19:49:04.295197 42701 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1212 19:49:04.295257 42701 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1212 19:49:04.295325 42701 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1212 19:49:04.295378 42701 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1212 19:49:04.295440 42701 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1212 19:49:04.295500 42701 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1212 19:49:04.295574 42701 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1212 19:49:04.295645 42701 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1212 19:49:04.295682 42701 kubeadm.go:319] [certs] Using the existing "sa" key
I1212 19:49:04.295736 42701 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 19:49:04.295786 42701 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 19:49:04.295858 42701 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1212 19:49:04.295910 42701 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 19:49:04.295972 42701 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 19:49:04.296026 42701 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 19:49:04.296108 42701 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1212 19:49:04.296173 42701 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 19:49:04.299042 42701 out.go:252] - Booting up control plane ...
I1212 19:49:04.299151 42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 19:49:04.299227 42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 19:49:04.299296 42701 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 19:49:04.299403 42701 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 19:49:04.299495 42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1212 19:49:04.299628 42701 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1212 19:49:04.299730 42701 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 19:49:04.299772 42701 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1212 19:49:04.299968 42701 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1212 19:49:04.300085 42701 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1212 19:49:04.300151 42701 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001214076s
I1212 19:49:04.300153 42701 kubeadm.go:319]
I1212 19:49:04.300215 42701 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1212 19:49:04.300255 42701 kubeadm.go:319] - The kubelet is not running
I1212 19:49:04.300366 42701 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1212 19:49:04.300369 42701 kubeadm.go:319]
I1212 19:49:04.300480 42701 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1212 19:49:04.300511 42701 kubeadm.go:319] - 'systemctl status kubelet'
I1212 19:49:04.300541 42701 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1212 19:49:04.300595 42701 kubeadm.go:319]
I1212 19:49:04.300596 42701 kubeadm.go:403] duration metric: took 8m6.739647745s to StartCluster
I1212 19:49:04.300625 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1212 19:49:04.300687 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1212 19:49:04.325747 42701 cri.go:89] found id: ""
I1212 19:49:04.325762 42701 logs.go:282] 0 containers: []
W1212 19:49:04.325774 42701 logs.go:284] No container was found matching "kube-apiserver"
I1212 19:49:04.325780 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1212 19:49:04.325854 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1212 19:49:04.357078 42701 cri.go:89] found id: ""
I1212 19:49:04.357093 42701 logs.go:282] 0 containers: []
W1212 19:49:04.357100 42701 logs.go:284] No container was found matching "etcd"
I1212 19:49:04.357105 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1212 19:49:04.357167 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1212 19:49:04.380493 42701 cri.go:89] found id: ""
I1212 19:49:04.380508 42701 logs.go:282] 0 containers: []
W1212 19:49:04.380515 42701 logs.go:284] No container was found matching "coredns"
I1212 19:49:04.380520 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1212 19:49:04.380581 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1212 19:49:04.403672 42701 cri.go:89] found id: ""
I1212 19:49:04.403686 42701 logs.go:282] 0 containers: []
W1212 19:49:04.403693 42701 logs.go:284] No container was found matching "kube-scheduler"
I1212 19:49:04.403698 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1212 19:49:04.403752 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1212 19:49:04.428821 42701 cri.go:89] found id: ""
I1212 19:49:04.428834 42701 logs.go:282] 0 containers: []
W1212 19:49:04.428841 42701 logs.go:284] No container was found matching "kube-proxy"
I1212 19:49:04.428847 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1212 19:49:04.428902 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1212 19:49:04.456883 42701 cri.go:89] found id: ""
I1212 19:49:04.456896 42701 logs.go:282] 0 containers: []
W1212 19:49:04.456904 42701 logs.go:284] No container was found matching "kube-controller-manager"
I1212 19:49:04.456909 42701 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1212 19:49:04.456964 42701 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1212 19:49:04.484233 42701 cri.go:89] found id: ""
I1212 19:49:04.484246 42701 logs.go:282] 0 containers: []
W1212 19:49:04.484253 42701 logs.go:284] No container was found matching "kindnet"
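[editor note] Each "0 containers" result above is one crictl query per control-plane component; the same sweep can be reproduced by hand, with the component names taken from the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"   # empty output here matches the failure above
    done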
I1212 19:49:04.484260 42701 logs.go:123] Gathering logs for kubelet ...
I1212 19:49:04.484270 42701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1212 19:49:04.540591 42701 logs.go:123] Gathering logs for dmesg ...
I1212 19:49:04.540608 42701 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1212 19:49:04.551242 42701 logs.go:123] Gathering logs for describe nodes ...
I1212 19:49:04.551260 42701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1212 19:49:04.617266 42701 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1212 19:49:04.608747 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.609239 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.610858 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.611353 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.613012 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1212 19:49:04.608747 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.609239 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.610858 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.611353 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:04.613012 4749 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1212 19:49:04.617276 42701 logs.go:123] Gathering logs for containerd ...
I1212 19:49:04.617287 42701 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1212 19:49:04.653829 42701 logs.go:123] Gathering logs for container status ...
I1212 19:49:04.653847 42701 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
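[editor note] The log-gathering pass above runs a fixed set of commands; a small sketch bundling the same diagnostics into one file for attaching to an issue (the output path is illustrative):

    {
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u containerd -n 400
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      sudo crictl ps -a
    } > /tmp/minikube-triage.txt 2>&1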
W1212 19:49:04.679477 42701 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001214076s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
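[editor note] The SystemVerification warning above says cgroup v1 can only be kept by setting the kubelet option 'FailCgroupV1' to 'false' and explicitly skipping the validation; a hedged sketch of doing that through the kubeadm config, assuming failCgroupV1 is the camelCase YAML spelling of the option the warning names:

    # Append a KubeletConfiguration document to the kubeadm config used above;
    # failCgroupV1: false is assumed to be the YAML form of 'FailCgroupV1'.
    cat <<'EOF' | sudo tee -a /var/tmp/minikube/kubeadm.yaml
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF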
W1212 19:49:04.679515 42701 out.go:285] *
W1212 19:49:04.679572 42701 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001214076s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1212 19:49:04.679812 42701 out.go:285] *
W1212 19:49:04.682301 42701 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 19:49:04.688334 42701 out.go:203]
W1212 19:49:04.691068 42701 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001214076s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1212 19:49:04.691131 42701 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1212 19:49:04.691154 42701 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
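[editor note] The suggestion above can be applied directly on the next start; note, though, that the kubelet journal further down rejects cgroup v1 itself, so the cgroup driver alone may not be the root cause here:

    out/minikube-linux-arm64 start -p functional-384006 \
      --extra-config=kubelet.cgroup-driver=systemd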
I1212 19:49:04.694294 42701 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709459179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709520478Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709639309Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709713875Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709773197Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709835357Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709890494Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.709947608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.710012640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.710102837Z" level=info msg="Connect containerd service"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.710479975Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.711194154Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.726667191Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.726904075Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.726846148Z" level=info msg="Start subscribing containerd event"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.727211455Z" level=info msg="Start recovering state"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763003138Z" level=info msg="Start event monitor"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763202673Z" level=info msg="Start cni network conf syncer for default"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763269945Z" level=info msg="Start streaming server"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763341270Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763399033Z" level=info msg="runtime interface starting up..."
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763452496Z" level=info msg="starting plugins..."
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.763514426Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 19:40:55 functional-384006 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 19:40:55 functional-384006 containerd[762]: time="2025-12-12T19:40:55.766218253Z" level=info msg="containerd successfully booted in 0.081302s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1212 19:49:05.646418 4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:05.646997 4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:05.648663 4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:05.649137 4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 19:49:05.650689 4864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec12 19:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014827] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.497798] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.037128] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.743560] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.524348] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
19:49:05 up 31 min, 0 user, load average: 0.04, 0.45, 0.74
Linux functional-384006 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 12 19:49:02 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:02 functional-384006 kubelet[4668]: E1212 19:49:02.740368 4668 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 19:49:02 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:49:02 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:49:03 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 12 19:49:03 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:03 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:03 functional-384006 kubelet[4673]: E1212 19:49:03.489009 4673 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 19:49:03 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:49:03 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 12 19:49:04 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:04 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:04 functional-384006 kubelet[4678]: E1212 19:49:04.240085 4678 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:49:04 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 12 19:49:04 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:04 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:05 functional-384006 kubelet[4778]: E1212 19:49:05.004346 4778 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 19:49:05 functional-384006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 19:49:05 functional-384006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 19:49:05 functional-384006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 12 19:49:05 functional-384006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 19:49:05 functional-384006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
-- /stdout --
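[editor note] The restart loop in the kubelet section above shows the kubelet refusing to run on a cgroup v1 host at all; which mode the host is in can be confirmed with one stat call, plus the daemon's own view when using the docker driver:

    stat -fc %T /sys/fs/cgroup                 # cgroup2fs = v2, tmpfs = v1
    docker info --format '{{.CgroupVersion}}'  # daemon-reported cgroup version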
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-384006 -n functional-384006: exit status 6 (326.120732ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1212 19:49:06.099155 48365 status.go:458] kubeconfig endpoint: get endpoint: "functional-384006" does not appear in /home/jenkins/minikube-integration/22112-2315/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-384006" apiserver is not running, skipping kubectl commands (state="Stopped")
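[editor note] Per the status warning above, repointing the kubectl context at the current profile is one command:

    out/minikube-linux-arm64 update-context -p functional-384006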
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.85s)