=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1213 08:39:58.313259 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:42:14.443347 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:42:42.155265 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.888704 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.895000 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.906346 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.927795 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:51.969160 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:52.050526 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:52.211939 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:52.533610 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:53.175656 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:54.457285 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:43:57.019661 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:02.141004 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:12.382285 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:44:32.863711 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:45:13.825147 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:46:35.747718 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-049633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1213 08:47:14.443169 4120 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/addons-289425/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m18.889832535s)
-- stdout --
* [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22128
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-074420" primary control-plane node in "functional-074420" cluster
* Pulling base image v0.0.48-1765275396-22083 ...
* Found network options:
- HTTP_PROXY=localhost:33459
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:33459 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001119929s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001208507s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
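One way to satisfy the NO_PROXY warning in the stderr above is to add the minikube IP (192.168.49.2, as reported by the warning) to NO_PROXY before starting. A minimal sketch, reusing the exact start arguments from this run; the proxy address localhost:33459 is specific to this job:

  export NO_PROXY="${NO_PROXY},192.168.49.2"
  out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0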
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
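The suggestion printed in the stderr points at the kubelet cgroup driver; a retry with that extra config might look like the sketch below (unverified against this job; all other flags are copied from the failing invocation). Separately, the SystemVerification warning states that kubelet v1.35+ on a cgroup v1 host requires the kubelet configuration option FailCgroupV1 set to false in the kubelet configuration file; the exact spelling of that field in the config file is not shown in this log.

  out/minikube-linux-arm64 start -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd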
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-074420
helpers_test.go:244: (dbg) docker inspect functional-074420:
-- stdout --
[
{
"Id": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
"Created": "2025-12-13T08:39:40.050933605Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 42410,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-13T08:39:40.114566965Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
"ResolvConfPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hostname",
"HostsPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/hosts",
"LogPath": "/var/lib/docker/containers/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a/662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a-json.log",
"Name": "/functional-074420",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-074420:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-074420",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "662fb3d52b3ef708bdfc9586215786123b364daa40a9ffcdf12a6dc6b3517e5a",
"LowerDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9-init/diff:/var/lib/docker/overlay2/bf2b9b85dc2e0bd14e050a3050145321dfbaee0a9aa8a5528cbacc402405e083/diff",
"MergedDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/merged",
"UpperDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/diff",
"WorkDir": "/var/lib/docker/overlay2/9087905bb6c7c9cd8fa971ade1f83e013baa834bde395048ca71fb0bcded27e9/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-074420",
"Source": "/var/lib/docker/volumes/functional-074420/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-074420",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-074420",
"name.minikube.sigs.k8s.io": "functional-074420",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6a7c4c379ad4ac743f7de440acbcfe1a193355a877316af502b30db1cca10b84",
"SandboxKey": "/var/run/docker/netns/6a7c4c379ad4",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-074420": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "ca:e0:c5:f8:aa:d2",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "eec4f1a46a6eb16bb38ec770212e92101cab5f78b94537593daea613e2505eff",
"EndpointID": "4e70a61a5b70fd39df8226c9a60e6916878df90eab8e3f359582e97836d46dd3",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-074420",
"662fb3d52b3e"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-074420 -n functional-074420: exit status 6 (313.348127ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1213 08:47:54.172969 47493 status.go:458] kubeconfig endpoint: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
** /stderr **
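The status output above recommends fixing the stale kubectl context; against this profile and binary that would be (a sketch based on the message in the stdout above):

  out/minikube-linux-arm64 -p functional-074420 update-context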
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-074420 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ mount │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdspecific-port3636403034/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ ssh │ functional-049633 ssh findmnt -T /mount-9p | grep 9p │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ ssh │ functional-049633 ssh findmnt -T /mount-9p | grep 9p │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ ssh │ functional-049633 ssh -- ls -la /mount-9p │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ ssh │ functional-049633 ssh sudo umount -f /mount-9p │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ mount │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount1 --alsologtostderr -v=1 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ mount │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount3 --alsologtostderr -v=1 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ mount │ -p functional-049633 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1763790089/001:/mount2 --alsologtostderr -v=1 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ ssh │ functional-049633 ssh findmnt -T /mount1 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ update-context │ functional-049633 update-context --alsologtostderr -v=2 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ update-context │ functional-049633 update-context --alsologtostderr -v=2 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ update-context │ functional-049633 update-context --alsologtostderr -v=2 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ image │ functional-049633 image ls --format short --alsologtostderr │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ ssh │ functional-049633 ssh findmnt -T /mount1 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ ssh │ functional-049633 ssh pgrep buildkitd │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ ssh │ functional-049633 ssh findmnt -T /mount2 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ ssh │ functional-049633 ssh findmnt -T /mount3 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ image │ functional-049633 image build -t localhost/my-image:functional-049633 testdata/build --alsologtostderr │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ mount │ -p functional-049633 --kill=true │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
│ image │ functional-049633 image ls --format yaml --alsologtostderr │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ image │ functional-049633 image ls --format json --alsologtostderr │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ image │ functional-049633 image ls --format table --alsologtostderr │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ image │ functional-049633 image ls │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ delete │ -p functional-049633 │ functional-049633 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ 13 Dec 25 08:39 UTC │
│ start │ -p functional-074420 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-074420 │ jenkins │ v1.37.0 │ 13 Dec 25 08:39 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/13 08:39:35
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1213 08:39:35.008642 42018 out.go:360] Setting OutFile to fd 1 ...
I1213 08:39:35.008789 42018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:35.008793 42018 out.go:374] Setting ErrFile to fd 2...
I1213 08:39:35.008798 42018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1213 08:39:35.009106 42018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22128-2315/.minikube/bin
I1213 08:39:35.009625 42018 out.go:368] Setting JSON to false
I1213 08:39:35.010541 42018 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1327,"bootTime":1765613848,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1213 08:39:35.010614 42018 start.go:143] virtualization:
I1213 08:39:35.015059 42018 out.go:179] * [functional-074420] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1213 08:39:35.019753 42018 out.go:179] - MINIKUBE_LOCATION=22128
I1213 08:39:35.019904 42018 notify.go:221] Checking for updates...
I1213 08:39:35.026824 42018 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1213 08:39:35.030087 42018 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22128-2315/kubeconfig
I1213 08:39:35.033754 42018 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22128-2315/.minikube
I1213 08:39:35.036831 42018 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1213 08:39:35.039923 42018 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1213 08:39:35.043137 42018 driver.go:422] Setting default libvirt URI to qemu:///system
I1213 08:39:35.071631 42018 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1213 08:39:35.071745 42018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 08:39:35.131799 42018 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 08:39:35.121544673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 08:39:35.131895 42018 docker.go:319] overlay module found
I1213 08:39:35.135130 42018 out.go:179] * Using the docker driver based on user configuration
I1213 08:39:35.137936 42018 start.go:309] selected driver: docker
I1213 08:39:35.137945 42018 start.go:927] validating driver "docker" against <nil>
I1213 08:39:35.137971 42018 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1213 08:39:35.138689 42018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1213 08:39:35.193292 42018 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-13 08:39:35.184152358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1213 08:39:35.193431 42018 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1213 08:39:35.193663 42018 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1213 08:39:35.196727 42018 out.go:179] * Using Docker driver with root privileges
I1213 08:39:35.199650 42018 cni.go:84] Creating CNI manager for ""
I1213 08:39:35.199701 42018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1213 08:39:35.199708 42018 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1213 08:39:35.199792 42018 start.go:353] cluster config:
{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 08:39:35.202870 42018 out.go:179] * Starting "functional-074420" primary control-plane node in "functional-074420" cluster
I1213 08:39:35.205757 42018 cache.go:134] Beginning downloading kic base image for docker with containerd
I1213 08:39:35.208764 42018 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
I1213 08:39:35.211618 42018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 08:39:35.211683 42018 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1213 08:39:35.211692 42018 cache.go:65] Caching tarball of preloaded images
I1213 08:39:35.211690 42018 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
I1213 08:39:35.211782 42018 preload.go:238] Found /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1213 08:39:35.211791 42018 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1213 08:39:35.212122 42018 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
I1213 08:39:35.212141 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json: {Name:mk487183f82ca2b9ae9675e1dbf064ee3afe4870 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:35.231325 42018 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
I1213 08:39:35.231337 42018 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
I1213 08:39:35.231358 42018 cache.go:243] Successfully downloaded all kic artifacts
I1213 08:39:35.231387 42018 start.go:360] acquireMachinesLock for functional-074420: {Name:mk9a8356bf81e58530d2c2996b4da0b7487171c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1213 08:39:35.231504 42018 start.go:364] duration metric: took 103.008µs to acquireMachinesLock for "functional-074420"
I1213 08:39:35.231560 42018 start.go:93] Provisioning new machine with config: &{Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1213 08:39:35.231639 42018 start.go:125] createHost starting for "" (driver="docker")
I1213 08:39:35.234997 42018 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1213 08:39:35.235294 42018 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:33459 to docker env.
I1213 08:39:35.235318 42018 start.go:159] libmachine.API.Create for "functional-074420" (driver="docker")
I1213 08:39:35.235340 42018 client.go:173] LocalClient.Create starting
I1213 08:39:35.235401 42018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem
I1213 08:39:35.235441 42018 main.go:143] libmachine: Decoding PEM data...
I1213 08:39:35.235458 42018 main.go:143] libmachine: Parsing certificate...
I1213 08:39:35.235539 42018 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem
I1213 08:39:35.235572 42018 main.go:143] libmachine: Decoding PEM data...
I1213 08:39:35.235583 42018 main.go:143] libmachine: Parsing certificate...
I1213 08:39:35.235970 42018 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 08:39:35.252270 42018 cli_runner.go:211] docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 08:39:35.252351 42018 network_create.go:284] running [docker network inspect functional-074420] to gather additional debugging logs...
I1213 08:39:35.252368 42018 cli_runner.go:164] Run: docker network inspect functional-074420
W1213 08:39:35.267395 42018 cli_runner.go:211] docker network inspect functional-074420 returned with exit code 1
I1213 08:39:35.267433 42018 network_create.go:287] error running [docker network inspect functional-074420]: docker network inspect functional-074420: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-074420 not found
I1213 08:39:35.267445 42018 network_create.go:289] output of [docker network inspect functional-074420]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-074420 not found
** /stderr **
I1213 08:39:35.267556 42018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 08:39:35.284421 42018 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191fd10}
I1213 08:39:35.284449 42018 network_create.go:124] attempt to create docker network functional-074420 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1213 08:39:35.284498 42018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-074420 functional-074420
I1213 08:39:35.350529 42018 network_create.go:108] docker network functional-074420 192.168.49.0/24 created
I1213 08:39:35.350550 42018 kic.go:121] calculated static IP "192.168.49.2" for the "functional-074420" container
I1213 08:39:35.350622 42018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1213 08:39:35.365623 42018 cli_runner.go:164] Run: docker volume create functional-074420 --label name.minikube.sigs.k8s.io=functional-074420 --label created_by.minikube.sigs.k8s.io=true
I1213 08:39:35.390202 42018 oci.go:103] Successfully created a docker volume functional-074420
I1213 08:39:35.390282 42018 cli_runner.go:164] Run: docker run --rm --name functional-074420-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-074420 --entrypoint /usr/bin/test -v functional-074420:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
I1213 08:39:35.929253 42018 oci.go:107] Successfully prepared a docker volume functional-074420
I1213 08:39:35.929315 42018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 08:39:35.929323 42018 kic.go:194] Starting extracting preloaded images to volume ...
I1213 08:39:35.929391 42018 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-074420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
I1213 08:39:39.978855 42018 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22128-2315/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-074420:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.049431633s)
I1213 08:39:39.978882 42018 kic.go:203] duration metric: took 4.049555975s to extract preloaded images to volume ...
W1213 08:39:39.979029 42018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1213 08:39:39.979126 42018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1213 08:39:40.036604 42018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-074420 --name functional-074420 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-074420 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-074420 --network functional-074420 --ip 192.168.49.2 --volume functional-074420:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
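The run command above publishes SSH, the apiserver port and the rest on loopback-bound ephemeral host ports; docker port shows the mappings that the SSH dials further down rely on (a sketch, container name as above):

  docker port functional-074420 22/tcp    # e.g. 127.0.0.1:32788, the port used below
  docker port functional-074420 8441/tcp  # host endpoint for the apiserver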
I1213 08:39:40.331411 42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Running}}
I1213 08:39:40.354871 42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 08:39:40.379568 42018 cli_runner.go:164] Run: docker exec functional-074420 stat /var/lib/dpkg/alternatives/iptables
I1213 08:39:40.427228 42018 oci.go:144] the created container "functional-074420" has a running status.
I1213 08:39:40.427248 42018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa...
I1213 08:39:40.509985 42018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1213 08:39:40.532303 42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 08:39:40.558663 42018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1213 08:39:40.558674 42018 kic_runner.go:114] Args: [docker exec --privileged functional-074420 chown docker:docker /home/docker/.ssh/authorized_keys]
I1213 08:39:40.612847 42018 cli_runner.go:164] Run: docker container inspect functional-074420 --format={{.State.Status}}
I1213 08:39:40.640034 42018 machine.go:94] provisionDockerMachine start ...
I1213 08:39:40.640113 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:40.667930 42018 main.go:143] libmachine: Using SSH client type: native
I1213 08:39:40.668249 42018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1213 08:39:40.668256 42018 main.go:143] libmachine: About to run SSH command:
hostname
I1213 08:39:40.668891 42018 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53326->127.0.0.1:32788: read: connection reset by peer
I1213 08:39:43.818892 42018 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
I1213 08:39:43.818906 42018 ubuntu.go:182] provisioning hostname "functional-074420"
I1213 08:39:43.818965 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:43.836575 42018 main.go:143] libmachine: Using SSH client type: native
I1213 08:39:43.836879 42018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1213 08:39:43.836887 42018 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-074420 && echo "functional-074420" | sudo tee /etc/hostname
I1213 08:39:43.996329 42018 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-074420
I1213 08:39:43.996404 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:44.017128 42018 main.go:143] libmachine: Using SSH client type: native
I1213 08:39:44.017431 42018 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1213 08:39:44.017445 42018 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-074420' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-074420/g' /etc/hosts;
else
echo '127.0.1.1 functional-074420' | sudo tee -a /etc/hosts;
fi
fi
I1213 08:39:44.168395 42018 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1213 08:39:44.168410 42018 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22128-2315/.minikube CaCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22128-2315/.minikube}
I1213 08:39:44.168430 42018 ubuntu.go:190] setting up certificates
I1213 08:39:44.168439 42018 provision.go:84] configureAuth start
I1213 08:39:44.168498 42018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
I1213 08:39:44.190620 42018 provision.go:143] copyHostCerts
I1213 08:39:44.190681 42018 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem, removing ...
I1213 08:39:44.190689 42018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem
I1213 08:39:44.190766 42018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/ca.pem (1082 bytes)
I1213 08:39:44.190863 42018 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem, removing ...
I1213 08:39:44.190867 42018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem
I1213 08:39:44.190893 42018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/cert.pem (1123 bytes)
I1213 08:39:44.190947 42018 exec_runner.go:144] found /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem, removing ...
I1213 08:39:44.190951 42018 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem
I1213 08:39:44.190973 42018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22128-2315/.minikube/key.pem (1675 bytes)
I1213 08:39:44.191065 42018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem org=jenkins.functional-074420 san=[127.0.0.1 192.168.49.2 functional-074420 localhost minikube]
I1213 08:39:44.560397 42018 provision.go:177] copyRemoteCerts
I1213 08:39:44.560447 42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1213 08:39:44.560486 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:44.577528 42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 08:39:44.683250 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1213 08:39:44.701053 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1213 08:39:44.718804 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1213 08:39:44.735986 42018 provision.go:87] duration metric: took 567.525212ms to configureAuth
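If the generated server cert ever needs checking, the SANs requested above can be confirmed with openssl (a sketch; path as logged):

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/22128-2315/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'
  # should mention 127.0.0.1, 192.168.49.2, functional-074420, localhost, minikube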
I1213 08:39:44.736003 42018 ubuntu.go:206] setting minikube options for container-runtime
I1213 08:39:44.736188 42018 config.go:182] Loaded profile config "functional-074420": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1213 08:39:44.736194 42018 machine.go:97] duration metric: took 4.096149491s to provisionDockerMachine
I1213 08:39:44.736199 42018 client.go:176] duration metric: took 9.500854047s to LocalClient.Create
I1213 08:39:44.736211 42018 start.go:167] duration metric: took 9.500893613s to libmachine.API.Create "functional-074420"
I1213 08:39:44.736217 42018 start.go:293] postStartSetup for "functional-074420" (driver="docker")
I1213 08:39:44.736227 42018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1213 08:39:44.736272 42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1213 08:39:44.736315 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:44.752885 42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 08:39:44.855450 42018 ssh_runner.go:195] Run: cat /etc/os-release
I1213 08:39:44.858746 42018 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1213 08:39:44.858763 42018 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1213 08:39:44.858773 42018 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/addons for local assets ...
I1213 08:39:44.858826 42018 filesync.go:126] Scanning /home/jenkins/minikube-integration/22128-2315/.minikube/files for local assets ...
I1213 08:39:44.858913 42018 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem -> 41202.pem in /etc/ssl/certs
I1213 08:39:44.858998 42018 filesync.go:149] local asset: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts -> hosts in /etc/test/nested/copy/4120
I1213 08:39:44.859038 42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4120
I1213 08:39:44.866644 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /etc/ssl/certs/41202.pem (1708 bytes)
I1213 08:39:44.883478 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/test/nested/copy/4120/hosts --> /etc/test/nested/copy/4120/hosts (40 bytes)
I1213 08:39:44.900926 42018 start.go:296] duration metric: took 164.696307ms for postStartSetup
I1213 08:39:44.901268 42018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
I1213 08:39:44.919358 42018 profile.go:143] Saving config to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/config.json ...
I1213 08:39:44.919750 42018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1213 08:39:44.919803 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:44.936164 42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 08:39:45.038365 42018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1213 08:39:45.060110 42018 start.go:128] duration metric: took 9.828441972s to createHost
I1213 08:39:45.060130 42018 start.go:83] releasing machines lock for "functional-074420", held for 9.828617997s
I1213 08:39:45.060227 42018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-074420
I1213 08:39:45.103915 42018 out.go:179] * Found network options:
I1213 08:39:45.111367 42018 out.go:179] - HTTP_PROXY=localhost:33459
W1213 08:39:45.119117 42018 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1213 08:39:45.124913 42018 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1213 08:39:45.128176 42018 ssh_runner.go:195] Run: cat /version.json
I1213 08:39:45.128245 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:45.129162 42018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1213 08:39:45.129227 42018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-074420
I1213 08:39:45.160597 42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 08:39:45.162212 42018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22128-2315/.minikube/machines/functional-074420/id_rsa Username:docker}
I1213 08:39:45.269246 42018 ssh_runner.go:195] Run: systemctl --version
I1213 08:39:45.365813 42018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1213 08:39:45.371968 42018 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1213 08:39:45.372044 42018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1213 08:39:45.398970 42018 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1213 08:39:45.398982 42018 start.go:496] detecting cgroup driver to use...
I1213 08:39:45.399023 42018 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1213 08:39:45.399069 42018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1213 08:39:45.414492 42018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1213 08:39:45.427187 42018 docker.go:218] disabling cri-docker service (if available) ...
I1213 08:39:45.427249 42018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1213 08:39:45.444774 42018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1213 08:39:45.465107 42018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1213 08:39:45.584345 42018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1213 08:39:45.711901 42018 docker.go:234] disabling docker service ...
I1213 08:39:45.711952 42018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1213 08:39:45.733252 42018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1213 08:39:45.746632 42018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1213 08:39:45.862418 42018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1213 08:39:45.989082 42018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1213 08:39:46.001863 42018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1213 08:39:46.020163 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1213 08:39:46.030149 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1213 08:39:46.043688 42018 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1213 08:39:46.043761 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1213 08:39:46.053238 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1213 08:39:46.061969 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1213 08:39:46.070460 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1213 08:39:46.079288 42018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1213 08:39:46.087279 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1213 08:39:46.095872 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1213 08:39:46.104667 42018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
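Taken together, the sed edits above rewrite /etc/containerd/config.toml in place: pause image pinned to 3.10.1, cgroupfs instead of the systemd cgroup driver, runc v2 as the runtime, the CNI conf dir restored, and unprivileged ports enabled. A quick check of the result (a sketch):

  docker exec functional-074420 grep -E \
    'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
  # expect lines like:
  #   sandbox_image = "registry.k8s.io/pause:3.10.1"
  #   SystemdCgroup = false
  #   conf_dir = "/etc/cni/net.d"
  #   enable_unprivileged_ports = true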
I1213 08:39:46.113309 42018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1213 08:39:46.120534 42018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1213 08:39:46.127919 42018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 08:39:46.242734 42018 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1213 08:39:46.375439 42018 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1213 08:39:46.375503 42018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1213 08:39:46.379347 42018 start.go:564] Will wait 60s for crictl version
I1213 08:39:46.379401 42018 ssh_runner.go:195] Run: which crictl
I1213 08:39:46.382863 42018 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1213 08:39:46.407695 42018 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1213 08:39:46.407775 42018 ssh_runner.go:195] Run: containerd --version
I1213 08:39:46.429202 42018 ssh_runner.go:195] Run: containerd --version
I1213 08:39:46.453205 42018 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1213 08:39:46.456067 42018 cli_runner.go:164] Run: docker network inspect functional-074420 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 08:39:46.473296 42018 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1213 08:39:46.477132 42018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 08:39:46.486681 42018 kubeadm.go:884] updating cluster {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1213 08:39:46.486784 42018 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1213 08:39:46.486847 42018 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:39:46.513672 42018 containerd.go:627] all images are preloaded for containerd runtime.
I1213 08:39:46.513684 42018 containerd.go:534] Images already preloaded, skipping extraction
I1213 08:39:46.513756 42018 ssh_runner.go:195] Run: sudo crictl images --output json
I1213 08:39:46.537482 42018 containerd.go:627] all images are preloaded for containerd runtime.
I1213 08:39:46.537493 42018 cache_images.go:86] Images are preloaded, skipping loading
I1213 08:39:46.537499 42018 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1213 08:39:46.537607 42018 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-074420 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
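A note on the unit above: the bare ExecStart= line is intentional systemd drop-in syntax; it clears the ExecStart inherited from the base kubelet.service so the next line can replace it with the flag-laden command. The merged unit can be inspected on the node (a sketch):

  docker exec functional-074420 systemctl cat kubelet
  # shows kubelet.service plus the 10-kubeadm.conf drop-in scp'd a few lines below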
I1213 08:39:46.537669 42018 ssh_runner.go:195] Run: sudo crictl info
I1213 08:39:46.561711 42018 cni.go:84] Creating CNI manager for ""
I1213 08:39:46.561722 42018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1213 08:39:46.561735 42018 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1213 08:39:46.561756 42018 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-074420 NodeName:functional-074420 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1213 08:39:46.561864 42018 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.49.2
bindPort: 8441
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "functional-074420"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.49.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
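Before handing this file to kubeadm init, the rendered config (written to /var/tmp/minikube/kubeadm.yaml below) can be sanity-checked with kubeadm itself (a sketch; 'kubeadm config validate' exists in recent kubeadm releases, and its availability in v1.35.0-beta.0 is assumed here):

  docker exec functional-074420 /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm \
    config validate --config /var/tmp/minikube/kubeadm.yaml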
I1213 08:39:46.561930 42018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1213 08:39:46.569762 42018 binaries.go:51] Found k8s binaries, skipping transfer
I1213 08:39:46.569819 42018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1213 08:39:46.577342 42018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1213 08:39:46.590349 42018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1213 08:39:46.603598 42018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1213 08:39:46.616603 42018 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1213 08:39:46.620266 42018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1213 08:39:46.630271 42018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1213 08:39:46.754305 42018 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1213 08:39:46.770188 42018 certs.go:69] Setting up /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420 for IP: 192.168.49.2
I1213 08:39:46.770198 42018 certs.go:195] generating shared ca certs ...
I1213 08:39:46.770212 42018 certs.go:227] acquiring lock for ca certs: {Name:mkc52718882f75e25e30325f9b7f673df2785cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:46.770344 42018 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key
I1213 08:39:46.770385 42018 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key
I1213 08:39:46.770391 42018 certs.go:257] generating profile certs ...
I1213 08:39:46.770451 42018 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key
I1213 08:39:46.770460 42018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt with IP's: []
I1213 08:39:47.026369 42018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt ...
I1213 08:39:47.026394 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.crt: {Name:mkf94bf2e36ee2a82c3216cba6efa264a3df13aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:47.026604 42018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key ...
I1213 08:39:47.026611 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/client.key: {Name:mkc6f3d57c62afe223b051632170572e08ab1587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:47.026707 42018 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068
I1213 08:39:47.026720 42018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1213 08:39:47.158042 42018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068 ...
I1213 08:39:47.158057 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068: {Name:mkb27a52c7997e89ac0f18c5820641571e6e2856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:47.158249 42018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068 ...
I1213 08:39:47.158259 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068: {Name:mk951bc88000f094f69ff3a51f592a8492883138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:47.158344 42018 certs.go:382] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt.971c8068 -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt
I1213 08:39:47.158420 42018 certs.go:386] copying /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key.971c8068 -> /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key
I1213 08:39:47.158472 42018 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key
I1213 08:39:47.158485 42018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt with IP's: []
I1213 08:39:47.250575 42018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt ...
I1213 08:39:47.250589 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt: {Name:mka2c0137322e7e1ccf578821ae754fe9cb2d3a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:47.250769 42018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key ...
I1213 08:39:47.250776 42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key: {Name:mkc71cca0e53de1bfc7eed430ccb4047ca2b0852 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1213 08:39:47.250966 42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem (1338 bytes)
W1213 08:39:47.251005 42018 certs.go:480] ignoring /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120_empty.pem, impossibly tiny 0 bytes
I1213 08:39:47.251013 42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca-key.pem (1675 bytes)
I1213 08:39:47.251041 42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/ca.pem (1082 bytes)
I1213 08:39:47.251064 42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/cert.pem (1123 bytes)
I1213 08:39:47.251087 42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/certs/key.pem (1675 bytes)
I1213 08:39:47.251133 42018 certs.go:484] found cert: /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem (1708 bytes)
I1213 08:39:47.251784 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1213 08:39:47.270551 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1213 08:39:47.290740 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1213 08:39:47.310143 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1213 08:39:47.329665 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1213 08:39:47.347558 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1213 08:39:47.365525 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1213 08:39:47.383483 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/profiles/functional-074420/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1213 08:39:47.401153 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/certs/4120.pem --> /usr/share/ca-certificates/4120.pem (1338 bytes)
I1213 08:39:47.419849 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/files/etc/ssl/certs/41202.pem --> /usr/share/ca-certificates/41202.pem (1708 bytes)
I1213 08:39:47.439488 42018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22128-2315/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1213 08:39:47.460458 42018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1213 08:39:47.473321 42018 ssh_runner.go:195] Run: openssl version
I1213 08:39:47.479379 42018 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41202.pem
I1213 08:39:47.486769 42018 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41202.pem /etc/ssl/certs/41202.pem
I1213 08:39:47.494131 42018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41202.pem
I1213 08:39:47.497903 42018 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 13 08:39 /usr/share/ca-certificates/41202.pem
I1213 08:39:47.497957 42018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41202.pem
I1213 08:39:47.539850 42018 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1213 08:39:47.547972 42018 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41202.pem /etc/ssl/certs/3ec20f2e.0
I1213 08:39:47.556051 42018 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1213 08:39:47.564230 42018 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1213 08:39:47.572240 42018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1213 08:39:47.576460 42018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 08:29 /usr/share/ca-certificates/minikubeCA.pem
I1213 08:39:47.576535 42018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1213 08:39:47.618080 42018 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1213 08:39:47.625373 42018 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1213 08:39:47.632911 42018 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4120.pem
I1213 08:39:47.640340 42018 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4120.pem /etc/ssl/certs/4120.pem
I1213 08:39:47.647879 42018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4120.pem
I1213 08:39:47.651656 42018 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 13 08:39 /usr/share/ca-certificates/4120.pem
I1213 08:39:47.651733 42018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4120.pem
I1213 08:39:47.692835 42018 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1213 08:39:47.701070 42018 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4120.pem /etc/ssl/certs/51391683.0
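The test/ln pairs above follow openssl's subject-hash convention: 'openssl x509 -hash' prints the name OpenSSL looks up as HASH.0 under /etc/ssl/certs, which is where b5213941.0 (minikubeCA), 3ec20f2e.0 and 51391683.0 come from. Reproducible by hand (a sketch):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # b5213941  -> symlinked as /etc/ssl/certs/b5213941.0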
I1213 08:39:47.708764 42018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1213 08:39:47.712327 42018 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1213 08:39:47.712367 42018 kubeadm.go:401] StartCluster: {Name:functional-074420 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-074420 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1213 08:39:47.712434 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1213 08:39:47.712485 42018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1213 08:39:47.738096 42018 cri.go:89] found id: ""
I1213 08:39:47.738161 42018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1213 08:39:47.745866 42018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1213 08:39:47.753358 42018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1213 08:39:47.753408 42018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 08:39:47.761299 42018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 08:39:47.761308 42018 kubeadm.go:158] found existing configuration files:
I1213 08:39:47.761364 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1213 08:39:47.768974 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 08:39:47.769027 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 08:39:47.776504 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1213 08:39:47.783908 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 08:39:47.783959 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 08:39:47.791205 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1213 08:39:47.798664 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 08:39:47.798730 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 08:39:47.805928 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1213 08:39:47.813474 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 08:39:47.813528 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1213 08:39:47.820866 42018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1213 08:39:47.952729 42018 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1213 08:39:47.953143 42018 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1213 08:39:48.022518 42018 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
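The middle warning above is worth flagging: on this cgroup v1 host (5.15 AWS kernel, cgroupfs driver detected earlier), kubelet v1.35 expects the 'FailCgroupV1' configuration option to be set to 'false', and the unhealthy-kubelet timeout that follows may be a symptom of exactly that. A sketch of the fragment, assuming the usual camelCase spelling in KubeletConfiguration:

  # KubeletConfiguration fragment (option name taken from the warning; spelling assumed)
  failCgroupV1: false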
I1213 08:43:51.217302 42018 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1213 08:43:51.217323 42018 kubeadm.go:319]
I1213 08:43:51.217395 42018 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1213 08:43:51.221041 42018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1213 08:43:51.221132 42018 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 08:43:51.221292 42018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1213 08:43:51.221393 42018 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1213 08:43:51.221453 42018 kubeadm.go:319] OS: Linux
I1213 08:43:51.221982 42018 kubeadm.go:319] CGROUPS_CPU: enabled
I1213 08:43:51.222071 42018 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1213 08:43:51.222155 42018 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1213 08:43:51.222241 42018 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1213 08:43:51.222359 42018 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1213 08:43:51.222573 42018 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1213 08:43:51.222633 42018 kubeadm.go:319] CGROUPS_PIDS: enabled
I1213 08:43:51.222697 42018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1213 08:43:51.222751 42018 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1213 08:43:51.222838 42018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 08:43:51.222958 42018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 08:43:51.223051 42018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 08:43:51.223121 42018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 08:43:51.225963 42018 out.go:252] - Generating certificates and keys ...
I1213 08:43:51.226053 42018 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 08:43:51.226132 42018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 08:43:51.226223 42018 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1213 08:43:51.226292 42018 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1213 08:43:51.226358 42018 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1213 08:43:51.226412 42018 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1213 08:43:51.226473 42018 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1213 08:43:51.226597 42018 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1213 08:43:51.226648 42018 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1213 08:43:51.226789 42018 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1213 08:43:51.226853 42018 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1213 08:43:51.226917 42018 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1213 08:43:51.226962 42018 kubeadm.go:319] [certs] Generating "sa" key and public key
I1213 08:43:51.227034 42018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 08:43:51.227085 42018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 08:43:51.227141 42018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 08:43:51.227199 42018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 08:43:51.227272 42018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 08:43:51.227333 42018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 08:43:51.227430 42018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 08:43:51.227504 42018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 08:43:51.232281 42018 out.go:252] - Booting up control plane ...
I1213 08:43:51.232373 42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 08:43:51.232481 42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 08:43:51.232552 42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 08:43:51.232655 42018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 08:43:51.232766 42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 08:43:51.232887 42018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 08:43:51.232972 42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 08:43:51.233010 42018 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 08:43:51.233146 42018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 08:43:51.233256 42018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 08:43:51.233328 42018 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001119929s
I1213 08:43:51.233331 42018 kubeadm.go:319]
I1213 08:43:51.233391 42018 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1213 08:43:51.233438 42018 kubeadm.go:319] - The kubelet is not running
I1213 08:43:51.233556 42018 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1213 08:43:51.233560 42018 kubeadm.go:319]
I1213 08:43:51.233663 42018 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1213 08:43:51.233694 42018 kubeadm.go:319] - 'systemctl status kubelet'
I1213 08:43:51.233738 42018 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1213 08:43:51.233801 42018 kubeadm.go:319]
W1213 08:43:51.233871 42018 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-074420 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001119929s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
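Triage sketch for the failed attempt above, before minikube retries below (the commands are the ones the kubeadm text itself suggests; the curl mirrors the healthz URL kubeadm was polling):
    systemctl status kubelet
    journalctl -xeu kubelet
    # the endpoint kubeadm waited on for 4m0s:
    curl -sSL http://127.0.0.1:10248/healthz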
I1213 08:43:51.233960 42018 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1213 08:43:51.644766 42018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1213 08:43:51.658140 42018 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1213 08:43:51.658191 42018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1213 08:43:51.666791 42018 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1213 08:43:51.666810 42018 kubeadm.go:158] found existing configuration files:
I1213 08:43:51.666861 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1213 08:43:51.674638 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1213 08:43:51.674701 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1213 08:43:51.682186 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1213 08:43:51.689889 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1213 08:43:51.689948 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1213 08:43:51.697487 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1213 08:43:51.705045 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1213 08:43:51.705100 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1213 08:43:51.712252 42018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1213 08:43:51.719870 42018 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1213 08:43:51.719935 42018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
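The four grep/rm cycles above are minikube's stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Per-file equivalent by hand (a sketch; in this run every file was already absent, so each grep exits 2 and the rm is a no-op):
    sudo grep -q https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf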
I1213 08:43:51.727359 42018 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1213 08:43:51.769563 42018 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1213 08:43:51.769610 42018 kubeadm.go:319] [preflight] Running pre-flight checks
I1213 08:43:51.835643 42018 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1213 08:43:51.835708 42018 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1213 08:43:51.835745 42018 kubeadm.go:319] OS: Linux
I1213 08:43:51.835789 42018 kubeadm.go:319] CGROUPS_CPU: enabled
I1213 08:43:51.835836 42018 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1213 08:43:51.835882 42018 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1213 08:43:51.835929 42018 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1213 08:43:51.835980 42018 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1213 08:43:51.836027 42018 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1213 08:43:51.836073 42018 kubeadm.go:319] CGROUPS_PIDS: enabled
I1213 08:43:51.836119 42018 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1213 08:43:51.836164 42018 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1213 08:43:51.905250 42018 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1213 08:43:51.905390 42018 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1213 08:43:51.905493 42018 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1213 08:43:51.911999 42018 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1213 08:43:51.915619 42018 out.go:252] - Generating certificates and keys ...
I1213 08:43:51.915706 42018 kubeadm.go:319] [certs] Using existing ca certificate authority
I1213 08:43:51.915765 42018 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1213 08:43:51.915837 42018 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1213 08:43:51.915894 42018 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1213 08:43:51.915959 42018 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1213 08:43:51.916230 42018 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1213 08:43:51.916310 42018 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1213 08:43:51.916557 42018 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1213 08:43:51.916629 42018 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1213 08:43:51.916875 42018 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1213 08:43:51.917066 42018 kubeadm.go:319] [certs] Using the existing "sa" key
I1213 08:43:51.917120 42018 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1213 08:43:52.072887 42018 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1213 08:43:52.306102 42018 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1213 08:43:52.396478 42018 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1213 08:43:52.909784 42018 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1213 08:43:53.263053 42018 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1213 08:43:53.263865 42018 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1213 08:43:53.266707 42018 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1213 08:43:53.270092 42018 out.go:252] - Booting up control plane ...
I1213 08:43:53.270214 42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1213 08:43:53.270336 42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1213 08:43:53.270436 42018 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1213 08:43:53.294198 42018 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1213 08:43:53.294312 42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1213 08:43:53.301591 42018 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1213 08:43:53.301843 42018 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1213 08:43:53.301882 42018 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1213 08:43:53.435451 42018 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1213 08:43:53.435633 42018 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1213 08:47:53.436618 42018 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001208507s
I1213 08:47:53.436637 42018 kubeadm.go:319]
I1213 08:47:53.436693 42018 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1213 08:47:53.436725 42018 kubeadm.go:319] - The kubelet is not running
I1213 08:47:53.436829 42018 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1213 08:47:53.436832 42018 kubeadm.go:319]
I1213 08:47:53.436936 42018 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1213 08:47:53.436967 42018 kubeadm.go:319] - 'systemctl status kubelet'
I1213 08:47:53.436997 42018 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1213 08:47:53.437000 42018 kubeadm.go:319]
I1213 08:47:53.441253 42018 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1213 08:47:53.441671 42018 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1213 08:47:53.441782 42018 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1213 08:47:53.442020 42018 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1213 08:47:53.442028 42018 kubeadm.go:319]
I1213 08:47:53.442095 42018 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
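The cgroup v1 warning above names the escape hatch. A minimal sketch of applying it (YAML key 'failCgroupV1' inferred from the warning's 'FailCgroupV1'; the file path is the one kubeadm wrote above; untested against this run):
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet
The warning's second requirement, skipping the SystemVerification preflight, is already covered by the --ignore-preflight-errors list minikube passes above.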
I1213 08:47:53.442143 42018 kubeadm.go:403] duration metric: took 8m5.729777522s to StartCluster
I1213 08:47:53.442187 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1213 08:47:53.442246 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1213 08:47:53.467027 42018 cri.go:89] found id: ""
I1213 08:47:53.467050 42018 logs.go:282] 0 containers: []
W1213 08:47:53.467057 42018 logs.go:284] No container was found matching "kube-apiserver"
I1213 08:47:53.467062 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1213 08:47:53.467124 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1213 08:47:53.492575 42018 cri.go:89] found id: ""
I1213 08:47:53.492588 42018 logs.go:282] 0 containers: []
W1213 08:47:53.492599 42018 logs.go:284] No container was found matching "etcd"
I1213 08:47:53.492603 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1213 08:47:53.492661 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1213 08:47:53.516560 42018 cri.go:89] found id: ""
I1213 08:47:53.516574 42018 logs.go:282] 0 containers: []
W1213 08:47:53.516580 42018 logs.go:284] No container was found matching "coredns"
I1213 08:47:53.516585 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1213 08:47:53.516640 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1213 08:47:53.544885 42018 cri.go:89] found id: ""
I1213 08:47:53.544899 42018 logs.go:282] 0 containers: []
W1213 08:47:53.544905 42018 logs.go:284] No container was found matching "kube-scheduler"
I1213 08:47:53.544910 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1213 08:47:53.544966 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1213 08:47:53.571557 42018 cri.go:89] found id: ""
I1213 08:47:53.571570 42018 logs.go:282] 0 containers: []
W1213 08:47:53.571577 42018 logs.go:284] No container was found matching "kube-proxy"
I1213 08:47:53.571582 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1213 08:47:53.571641 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1213 08:47:53.597400 42018 cri.go:89] found id: ""
I1213 08:47:53.597414 42018 logs.go:282] 0 containers: []
W1213 08:47:53.597420 42018 logs.go:284] No container was found matching "kube-controller-manager"
I1213 08:47:53.597426 42018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1213 08:47:53.597482 42018 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1213 08:47:53.620080 42018 cri.go:89] found id: ""
I1213 08:47:53.620093 42018 logs.go:282] 0 containers: []
W1213 08:47:53.620099 42018 logs.go:284] No container was found matching "kindnet"
I1213 08:47:53.620107 42018 logs.go:123] Gathering logs for kubelet ...
I1213 08:47:53.620117 42018 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1213 08:47:53.676418 42018 logs.go:123] Gathering logs for dmesg ...
I1213 08:47:53.676436 42018 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1213 08:47:53.687053 42018 logs.go:123] Gathering logs for describe nodes ...
I1213 08:47:53.687068 42018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1213 08:47:53.750366 42018 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1213 08:47:53.741274 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.742022 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.743792 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.744434 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.745919 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1213 08:47:53.741274 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.742022 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.743792 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.744434 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:53.745919 4761 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1213 08:47:53.750377 42018 logs.go:123] Gathering logs for containerd ...
I1213 08:47:53.750387 42018 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1213 08:47:53.790702 42018 logs.go:123] Gathering logs for container status ...
I1213 08:47:53.790722 42018 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1213 08:47:53.821228 42018 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001208507s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1213 08:47:53.821260 42018 out.go:285] *
W1213 08:47:53.821320 42018 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001208507s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1213 08:47:53.821338 42018 out.go:285] *
W1213 08:47:53.823454 42018 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
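As the box suggests, the full bundle for a report can be captured with (profile flag added as an assumption):
    minikube logs --file=logs.txt -p functional-074420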
I1213 08:47:53.829350 42018 out.go:203]
W1213 08:47:53.833068 42018 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001208507s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1213 08:47:53.833383 42018 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1213 08:47:53.833466 42018 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1213 08:47:53.838401 42018 out.go:203]
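A sketch of the suggestion above on a clean profile (driver and runtime flags copied from this run; the cgroup-driver workaround itself is untested here):
    minikube delete -p functional-074420
    minikube start -p functional-074420 --driver=docker --container-runtime=containerd \
      --extra-config=kubelet.cgroup-driver=systemd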
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319755842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319823584Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319919946Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.319982305Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320015446Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320031405Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320041777Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320055791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320074958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320113260Z" level=info msg="Connect containerd service"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.320426706Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.321024610Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.333892514Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.334078049Z" level=info msg="Start subscribing containerd event"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.334228073Z" level=info msg="Start recovering state"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.334161964Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.372946246Z" level=info msg="Start event monitor"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373143261Z" level=info msg="Start cni network conf syncer for default"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373208427Z" level=info msg="Start streaming server"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373276907Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373335952Z" level=info msg="runtime interface starting up..."
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373390885Z" level=info msg="starting plugins..."
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.373456141Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 13 08:39:46 functional-074420 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 08:39:46 functional-074420 containerd[768]: time="2025-12-13T08:39:46.375241885Z" level=info msg="containerd successfully booted in 0.081295s"
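The "failed to load cni during init" error above is expected this early in boot: nothing has written a config to /etc/cni/net.d yet, and the CRI plugin retries once one appears. Quick check from the host (a sketch; the ssh invocation is an assumption):
    minikube ssh -p functional-074420 "ls /etc/cni/net.d"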
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1213 08:47:54.798023 4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:54.798564 4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:54.800057 4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:54.800486 4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1213 08:47:54.801944 4884 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec13 08:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014993] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.510221] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.035255] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.809232] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.400796] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
08:47:54 up 30 min, 0 user, load average: 0.47, 0.54, 0.65
Linux functional-074420 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 13 08:47:51 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:52 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 13 08:47:52 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:52 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:52 functional-074420 kubelet[4686]: E1213 08:47:52.322071 4686 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 08:47:52 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:52 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 13 08:47:53 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:53 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:53 functional-074420 kubelet[4692]: E1213 08:47:53.068125 4692 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 13 08:47:53 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:53 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:53 functional-074420 kubelet[4768]: E1213 08:47:53.843780 4768 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:53 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 08:47:54 functional-074420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 13 08:47:54 functional-074420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:54 functional-074420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 08:47:54 functional-074420 kubelet[4819]: E1213 08:47:54.593626 4819 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 13 08:47:54 functional-074420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 08:47:54 functional-074420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-074420 -n functional-074420: exit status 6 (399.858737ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1213 08:47:55.311702 47709 status.go:458] kubeconfig endpoint: get endpoint: "functional-074420" does not appear in /home/jenkins/minikube-integration/22128-2315/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-074420" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.37s)
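Root cause, per the kubelet section above: the v1.35.0-beta.0 kubelet fails its own configuration validation on a cgroup v1 host, and systemd restarts it in a loop (counters 318-321) until kubeadm's 4m0s health wait expires. Host-side confirmation (a sketch; the boot parameter is an assumption for systemd hosts):
    stat -fc %T /sys/fs/cgroup   # 'tmpfs' = cgroup v1, 'cgroup2fs' = v2
    # a systemd host switches to v2 when booted with:
    #   systemd.unified_cgroup_hierarchy=1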