=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1208 00:24:30.128022 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:24:57.838888 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.226924 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.233617 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.245349 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.266807 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.308284 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.389827 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.551415 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:28.873216 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:29.515248 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:30.797641 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:33.359181 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:38.481531 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:26:48.723789 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:09.205163 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:27:50.167024 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:29:12.089552 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-932121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 00:29:30.128468 846711 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/addons-011456/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.075832909s)
-- stdout --
* [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22054
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-386544" primary control-plane node in "functional-386544" cluster
* Pulling base image v0.0.48-1764843390-22032 ...
* Found network options:
- HTTP_PROXY=localhost:34883
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:34883 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000117256s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000226772s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000226772s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
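Note on the proxy warnings in the stderr above: minikube flagged that NO_PROXY does not include the minikube IP (192.168.49.2). A minimal sketch of the usual workaround from the linked vpn_and_proxy handbook page, assuming the HTTP_PROXY at localhost:34883 is genuinely required (the IP comes from the warning itself; adding the subnet as well is an assumption):

    # Hypothetical retry: add the minikube IP (and optionally its subnet) to NO_PROXY first
    export NO_PROXY="$NO_PROXY,192.168.49.2,192.168.49.0/24"
    out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0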
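The troubleshooting commands kubeadm prints ('systemctl status kubelet', 'journalctl -xeu kubelet') have to run inside the node container when the docker driver is used; a sketch via minikube ssh, with the profile name from this run:

    # Run kubeadm's suggested kubelet checks inside the minikube node container
    out/minikube-linux-arm64 ssh -p functional-386544 -- sudo systemctl status kubelet
    out/minikube-linux-arm64 ssh -p functional-386544 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100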
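The SystemVerification warning and the closing suggestion both point at cgroups: the host runs cgroups v1 with the cgroupfs driver, while kubelet v1.35 deprecates v1 support. A hedged retry using exactly the flag the suggestion names (whether it resolves the health-check failure on this kernel is untested; the 'FailCgroupV1' option mentioned in the warning is a kubelet configuration-file field per the linked KEP, not a command-line flag):

    # Retry with the cgroup driver named in the suggestion above
    out/minikube-linux-arm64 start -p functional-386544 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd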
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-386544
helpers_test.go:243: (dbg) docker inspect functional-386544:
-- stdout --
[
{
"Id": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
"Created": "2025-12-08T00:22:27.490172837Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 885529,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-08T00:22:27.576231077Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
"ResolvConfPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hostname",
"HostsPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/hosts",
"LogPath": "/var/lib/docker/containers/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053/fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053-json.log",
"Name": "/functional-386544",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-386544:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-386544",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "fc0795925cb5ac4d03b4d5339ec1b342e43f8ecdc6db1890af3c70c623460053",
"LowerDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517-init/diff:/var/lib/docker/overlay2/40b294e00072f0af5326fafe1c61a1dc4e4e1c6eeed1278d3ffab539c8eefa35/diff",
"MergedDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/merged",
"UpperDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/diff",
"WorkDir": "/var/lib/docker/overlay2/4e3dd2e23788acd4984c3f1b6ea02b0868bb2b5fa163968b1903864c29791517/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-386544",
"Source": "/var/lib/docker/volumes/functional-386544/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-386544",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-386544",
"name.minikube.sigs.k8s.io": "functional-386544",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "ab47bf74a4e07662f74e805ad531ed900dc01dbc84c517f73ffad176d15f56f3",
"SandboxKey": "/var/run/docker/netns/ab47bf74a4e0",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33558"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33559"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33562"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33560"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33561"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-386544": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "72:66:b2:62:5b:25",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "06b1cc3553c03246538eaca032321ef38d808b052cf04c64aae8da597b4ffbb1",
"EndpointID": "1c471ebdc0f01809df2b633405d4e3a5ff0d85802cc369ab8d5fe570bed61153",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-386544",
"fc0795925cb5"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-386544 -n functional-386544: exit status 6 (337.347472ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1208 00:30:41.892374 890638 status.go:458] kubeconfig endpoint: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
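The status output warns that kubectl points at a stale context, and the status error shows the kubeconfig no longer contains the "functional-386544" endpoint; the fix minikube itself suggests, sketched with this run's profile:

    # Rewrite the kubeconfig entry for this profile, per the warning above
    out/minikube-linux-arm64 update-context -p functional-386544
    kubectl config current-context   # expected: functional-386544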
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-386544 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-932121 image ls │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ ssh │ functional-932121 ssh sudo cat /usr/share/ca-certificates/8467112.pem │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ ssh │ functional-932121 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ update-context │ functional-932121 update-context --alsologtostderr -v=2 │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ update-context │ functional-932121 update-context --alsologtostderr -v=2 │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image load --daemon kicbase/echo-server:functional-932121 --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ update-context │ functional-932121 update-context --alsologtostderr -v=2 │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image save kicbase/echo-server:functional-932121 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image rm kicbase/echo-server:functional-932121 --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image save --daemon kicbase/echo-server:functional-932121 --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls --format short --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls --format yaml --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls --format json --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls --format table --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ ssh │ functional-932121 ssh pgrep buildkitd │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ │
│ image │ functional-932121 image build -t localhost/my-image:functional-932121 testdata/build --alsologtostderr │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ image │ functional-932121 image ls │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ delete │ -p functional-932121 │ functional-932121 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ 08 Dec 25 00:22 UTC │
│ start │ -p functional-386544 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-386544 │ jenkins │ v1.37.0 │ 08 Dec 25 00:22 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/08 00:22:22
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1208 00:22:22.512090 885129 out.go:360] Setting OutFile to fd 1 ...
I1208 00:22:22.512193 885129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:22.512197 885129 out.go:374] Setting ErrFile to fd 2...
I1208 00:22:22.512201 885129 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 00:22:22.512439 885129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22054-843440/.minikube/bin
I1208 00:22:22.512841 885129 out.go:368] Setting JSON to false
I1208 00:22:22.513726 885129 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18295,"bootTime":1765135047,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1208 00:22:22.513785 885129 start.go:143] virtualization:
I1208 00:22:22.518377 885129 out.go:179] * [functional-386544] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1208 00:22:22.522245 885129 out.go:179] - MINIKUBE_LOCATION=22054
I1208 00:22:22.522325 885129 notify.go:221] Checking for updates...
I1208 00:22:22.526559 885129 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1208 00:22:22.529989 885129 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22054-843440/kubeconfig
I1208 00:22:22.533381 885129 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22054-843440/.minikube
I1208 00:22:22.536715 885129 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1208 00:22:22.540088 885129 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1208 00:22:22.543444 885129 driver.go:422] Setting default libvirt URI to qemu:///system
I1208 00:22:22.580975 885129 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1208 00:22:22.581104 885129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:22:22.640454 885129 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-08 00:22:22.631219957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:22:22.640548 885129 docker.go:319] overlay module found
I1208 00:22:22.643908 885129 out.go:179] * Using the docker driver based on user configuration
I1208 00:22:22.647008 885129 start.go:309] selected driver: docker
I1208 00:22:22.647016 885129 start.go:927] validating driver "docker" against <nil>
I1208 00:22:22.647028 885129 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1208 00:22:22.647733 885129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1208 00:22:22.710733 885129 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-08 00:22:22.700399589 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1208 00:22:22.710888 885129 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1208 00:22:22.711107 885129 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1208 00:22:22.714074 885129 out.go:179] * Using Docker driver with root privileges
I1208 00:22:22.717095 885129 cni.go:84] Creating CNI manager for ""
I1208 00:22:22.717151 885129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1208 00:22:22.717158 885129 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1208 00:22:22.717231 885129 start.go:353] cluster config:
{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1208 00:22:22.722520 885129 out.go:179] * Starting "functional-386544" primary control-plane node in "functional-386544" cluster
I1208 00:22:22.725484 885129 cache.go:134] Beginning downloading kic base image for docker with containerd
I1208 00:22:22.728490 885129 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
I1208 00:22:22.731402 885129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1208 00:22:22.731445 885129 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1208 00:22:22.731445 885129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
I1208 00:22:22.731453 885129 cache.go:65] Caching tarball of preloaded images
I1208 00:22:22.731551 885129 preload.go:238] Found /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1208 00:22:22.731560 885129 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1208 00:22:22.731899 885129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
I1208 00:22:22.731917 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json: {Name:mkc1cab28ef3e474ac0a5249c6807f96abc9927d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:22.751608 885129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
I1208 00:22:22.751620 885129 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
I1208 00:22:22.751633 885129 cache.go:243] Successfully downloaded all kic artifacts
I1208 00:22:22.751664 885129 start.go:360] acquireMachinesLock for functional-386544: {Name:mk3c41cde5d73d10efa09ccb9028ddf1bd6ae9b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1208 00:22:22.751771 885129 start.go:364] duration metric: took 92.169µs to acquireMachinesLock for "functional-386544"
I1208 00:22:22.751795 885129 start.go:93] Provisioning new machine with config: &{Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1208 00:22:22.751875 885129 start.go:125] createHost starting for "" (driver="docker")
I1208 00:22:22.757220 885129 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1208 00:22:22.757505 885129 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34883 to docker env.
I1208 00:22:22.757534 885129 start.go:159] libmachine.API.Create for "functional-386544" (driver="docker")
I1208 00:22:22.757555 885129 client.go:173] LocalClient.Create starting
I1208 00:22:22.757617 885129 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem
I1208 00:22:22.757652 885129 main.go:143] libmachine: Decoding PEM data...
I1208 00:22:22.757683 885129 main.go:143] libmachine: Parsing certificate...
I1208 00:22:22.757744 885129 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem
I1208 00:22:22.757761 885129 main.go:143] libmachine: Decoding PEM data...
I1208 00:22:22.757772 885129 main.go:143] libmachine: Parsing certificate...
I1208 00:22:22.758161 885129 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1208 00:22:22.777609 885129 cli_runner.go:211] docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1208 00:22:22.777708 885129 network_create.go:284] running [docker network inspect functional-386544] to gather additional debugging logs...
I1208 00:22:22.777725 885129 cli_runner.go:164] Run: docker network inspect functional-386544
W1208 00:22:22.802644 885129 cli_runner.go:211] docker network inspect functional-386544 returned with exit code 1
I1208 00:22:22.802670 885129 network_create.go:287] error running [docker network inspect functional-386544]: docker network inspect functional-386544: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-386544 not found
I1208 00:22:22.802691 885129 network_create.go:289] output of [docker network inspect functional-386544]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-386544 not found
** /stderr **
I1208 00:22:22.802885 885129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1208 00:22:22.822400 885129 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001878f50}
I1208 00:22:22.822440 885129 network_create.go:124] attempt to create docker network functional-386544 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1208 00:22:22.822528 885129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-386544 functional-386544
I1208 00:22:22.883900 885129 network_create.go:108] docker network functional-386544 192.168.49.0/24 created
I1208 00:22:22.883923 885129 kic.go:121] calculated static IP "192.168.49.2" for the "functional-386544" container
I1208 00:22:22.884002 885129 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1208 00:22:22.898723 885129 cli_runner.go:164] Run: docker volume create functional-386544 --label name.minikube.sigs.k8s.io=functional-386544 --label created_by.minikube.sigs.k8s.io=true
I1208 00:22:22.917792 885129 oci.go:103] Successfully created a docker volume functional-386544
I1208 00:22:22.917869 885129 cli_runner.go:164] Run: docker run --rm --name functional-386544-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-386544 --entrypoint /usr/bin/test -v functional-386544:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
I1208 00:22:23.451691 885129 oci.go:107] Successfully prepared a docker volume functional-386544
I1208 00:22:23.451760 885129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1208 00:22:23.451768 885129 kic.go:194] Starting extracting preloaded images to volume ...
I1208 00:22:23.451853 885129 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-386544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
I1208 00:22:27.407008 885129 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22054-843440/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-386544:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.955118511s)
I1208 00:22:27.407028 885129 kic.go:203] duration metric: took 3.955257351s to extract preloaded images to volume ...
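The ~4s step above streams the preloaded image tarball into the named volume through a throwaway container. To peek at what landed in the volume you can reuse the same trick with a different entrypoint; this is a sketch, the listing path is illustrative and the image tag is the one from this run:

    docker run --rm --entrypoint ls \
      -v functional-386544:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032 \
      /extractDir/lib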
W1208 00:22:27.407182 885129 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1208 00:22:27.407279 885129 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1208 00:22:27.475291 885129 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-386544 --name functional-386544 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-386544 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-386544 --network functional-386544 --ip 192.168.49.2 --volume functional-386544:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
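Each --publish=127.0.0.1:: flag above binds a container port to an ephemeral host port on loopback; the SSH port that appears later in this log (33558) is resolved from that mapping, which you can also query directly:

    docker port functional-386544 22/tcp     # e.g. 127.0.0.1:33558
    docker port functional-386544 8441/tcp   # host endpoint for the apiserver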
I1208 00:22:27.785554 885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Running}}
I1208 00:22:27.810912 885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:22:27.836609 885129 cli_runner.go:164] Run: docker exec functional-386544 stat /var/lib/dpkg/alternatives/iptables
I1208 00:22:27.890159 885129 oci.go:144] the created container "functional-386544" has a running status.
I1208 00:22:27.890180 885129 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa...
I1208 00:22:28.001631 885129 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1208 00:22:28.030968 885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:22:28.058416 885129 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1208 00:22:28.058428 885129 kic_runner.go:114] Args: [docker exec --privileged functional-386544 chown docker:docker /home/docker/.ssh/authorized_keys]
I1208 00:22:28.128624 885129 cli_runner.go:164] Run: docker container inspect functional-386544 --format={{.State.Status}}
I1208 00:22:28.160295 885129 machine.go:94] provisionDockerMachine start ...
I1208 00:22:28.160392 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:28.186554 885129 main.go:143] libmachine: Using SSH client type: native
I1208 00:22:28.186889 885129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33558 <nil> <nil>}
I1208 00:22:28.186897 885129 main.go:143] libmachine: About to run SSH command:
hostname
I1208 00:22:28.187495 885129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1208 00:22:31.338268 885129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
I1208 00:22:31.338282 885129 ubuntu.go:182] provisioning hostname "functional-386544"
I1208 00:22:31.338348 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:31.360260 885129 main.go:143] libmachine: Using SSH client type: native
I1208 00:22:31.360575 885129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33558 <nil> <nil>}
I1208 00:22:31.360583 885129 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-386544 && echo "functional-386544" | sudo tee /etc/hostname
I1208 00:22:31.520531 885129 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-386544
I1208 00:22:31.520617 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:31.539097 885129 main.go:143] libmachine: Using SSH client type: native
I1208 00:22:31.539403 885129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33558 <nil> <nil>}
I1208 00:22:31.539420 885129 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-386544' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-386544/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-386544' | sudo tee -a /etc/hosts;
  fi
fi
I1208 00:22:31.690612 885129 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1208 00:22:31.690627 885129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22054-843440/.minikube CaCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22054-843440/.minikube}
I1208 00:22:31.690657 885129 ubuntu.go:190] setting up certificates
I1208 00:22:31.690666 885129 provision.go:84] configureAuth start
I1208 00:22:31.690725 885129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
I1208 00:22:31.707701 885129 provision.go:143] copyHostCerts
I1208 00:22:31.707761 885129 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem, removing ...
I1208 00:22:31.707769 885129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem
I1208 00:22:31.707851 885129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/ca.pem (1078 bytes)
I1208 00:22:31.707950 885129 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem, removing ...
I1208 00:22:31.707953 885129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem
I1208 00:22:31.707979 885129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/cert.pem (1123 bytes)
I1208 00:22:31.708040 885129 exec_runner.go:144] found /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem, removing ...
I1208 00:22:31.708044 885129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem
I1208 00:22:31.708066 885129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22054-843440/.minikube/key.pem (1679 bytes)
I1208 00:22:31.708116 885129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem org=jenkins.functional-386544 san=[127.0.0.1 192.168.49.2 functional-386544 localhost minikube]
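The server certificate is generated with the SAN set listed above (loopback, the container IP, the profile name, localhost and minikube). One way to double-check the SANs on the written cert, using the path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'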
I1208 00:22:31.993694 885129 provision.go:177] copyRemoteCerts
I1208 00:22:31.993751 885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1208 00:22:31.993797 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:32.013965 885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:22:32.122251 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1208 00:22:32.139425 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1208 00:22:32.156636 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1208 00:22:32.174174 885129 provision.go:87] duration metric: took 483.486089ms to configureAuth
I1208 00:22:32.174192 885129 ubuntu.go:206] setting minikube options for container-runtime
I1208 00:22:32.174382 885129 config.go:182] Loaded profile config "functional-386544": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1208 00:22:32.174389 885129 machine.go:97] duration metric: took 4.01408375s to provisionDockerMachine
I1208 00:22:32.174394 885129 client.go:176] duration metric: took 9.416835024s to LocalClient.Create
I1208 00:22:32.174407 885129 start.go:167] duration metric: took 9.416876272s to libmachine.API.Create "functional-386544"
I1208 00:22:32.174412 885129 start.go:293] postStartSetup for "functional-386544" (driver="docker")
I1208 00:22:32.174421 885129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1208 00:22:32.174562 885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1208 00:22:32.174599 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:32.192357 885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:22:32.298517 885129 ssh_runner.go:195] Run: cat /etc/os-release
I1208 00:22:32.301738 885129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1208 00:22:32.301755 885129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1208 00:22:32.301765 885129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/addons for local assets ...
I1208 00:22:32.301823 885129 filesync.go:126] Scanning /home/jenkins/minikube-integration/22054-843440/.minikube/files for local assets ...
I1208 00:22:32.301915 885129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem -> 8467112.pem in /etc/ssl/certs
I1208 00:22:32.301990 885129 filesync.go:149] local asset: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts -> hosts in /etc/test/nested/copy/846711
I1208 00:22:32.302043 885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/846711
I1208 00:22:32.309721 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /etc/ssl/certs/8467112.pem (1708 bytes)
I1208 00:22:32.327056 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/test/nested/copy/846711/hosts --> /etc/test/nested/copy/846711/hosts (40 bytes)
I1208 00:22:32.345724 885129 start.go:296] duration metric: took 171.297297ms for postStartSetup
I1208 00:22:32.346100 885129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
I1208 00:22:32.363671 885129 profile.go:143] Saving config to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/config.json ...
I1208 00:22:32.363950 885129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1208 00:22:32.363989 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:32.381065 885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:22:32.483704 885129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1208 00:22:32.488672 885129 start.go:128] duration metric: took 9.736783107s to createHost
I1208 00:22:32.488687 885129 start.go:83] releasing machines lock for "functional-386544", held for 9.736909623s
I1208 00:22:32.488766 885129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-386544
I1208 00:22:32.509799 885129 out.go:179] * Found network options:
I1208 00:22:32.512729 885129 out.go:179] - HTTP_PROXY=localhost:34883
W1208 00:22:32.515752 885129 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1208 00:22:32.518635 885129 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1208 00:22:32.521423 885129 ssh_runner.go:195] Run: cat /version.json
I1208 00:22:32.521468 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:32.521498 885129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1208 00:22:32.521550 885129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-386544
I1208 00:22:32.539803 885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:22:32.548653 885129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/22054-843440/.minikube/machines/functional-386544/id_rsa Username:docker}
I1208 00:22:32.646126 885129 ssh_runner.go:195] Run: systemctl --version
I1208 00:22:32.745008 885129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1208 00:22:32.749524 885129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1208 00:22:32.749605 885129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1208 00:22:32.777737 885129 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
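Note that the competing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so the step is reversible. Undoing it by hand would look like this sketch:

    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] || continue
      sudo mv "$f" "${f%.mk_disabled}"
    done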
I1208 00:22:32.777751 885129 start.go:496] detecting cgroup driver to use...
I1208 00:22:32.777786 885129 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1208 00:22:32.777842 885129 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1208 00:22:32.792673 885129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1208 00:22:32.805545 885129 docker.go:218] disabling cri-docker service (if available) ...
I1208 00:22:32.805597 885129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1208 00:22:32.823501 885129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1208 00:22:32.842234 885129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1208 00:22:32.953988 885129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1208 00:22:33.084770 885129 docker.go:234] disabling docker service ...
I1208 00:22:33.084824 885129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1208 00:22:33.106566 885129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1208 00:22:33.120427 885129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1208 00:22:33.245224 885129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1208 00:22:33.368146 885129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1208 00:22:33.382202 885129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1208 00:22:33.398161 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1208 00:22:33.408136 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1208 00:22:33.417496 885129 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1208 00:22:33.417558 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1208 00:22:33.426603 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1208 00:22:33.436212 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1208 00:22:33.445084 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1208 00:22:33.454235 885129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1208 00:22:33.462539 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1208 00:22:33.472121 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1208 00:22:33.481022 885129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1208 00:22:33.490531 885129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1208 00:22:33.498399 885129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1208 00:22:33.506396 885129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1208 00:22:33.626744 885129 ssh_runner.go:195] Run: sudo systemctl restart containerd
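The sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false) before the restart. Assuming a containerd 2.x that supports config dump, the effective merged config can be verified on the node:

    sudo containerd config dump | grep SystemdCgroup
    # expect: SystemdCgroup = false, matching the "cgroupfs" host driver detected above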
I1208 00:22:33.752269 885129 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1208 00:22:33.752359 885129 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1208 00:22:33.757435 885129 start.go:564] Will wait 60s for crictl version
I1208 00:22:33.757492 885129 ssh_runner.go:195] Run: which crictl
I1208 00:22:33.761189 885129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1208 00:22:33.786619 885129 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1208 00:22:33.786696 885129 ssh_runner.go:195] Run: containerd --version
I1208 00:22:33.809236 885129 ssh_runner.go:195] Run: containerd --version
I1208 00:22:33.834001 885129 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1208 00:22:33.837045 885129 cli_runner.go:164] Run: docker network inspect functional-386544 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1208 00:22:33.854269 885129 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1208 00:22:33.858150 885129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
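The hosts-file rewrite above uses a grep -v / echo / cp round trip instead of sed -i on purpose: inside a container /etc/hosts is a bind mount, so it has to be overwritten in place with cp rather than replaced by rename, which is what sed -i does. The same idempotent pattern, reduced to a sketch:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts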
I1208 00:22:33.867900 885129 kubeadm.go:884] updating cluster {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1208 00:22:33.868010 885129 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1208 00:22:33.868073 885129 ssh_runner.go:195] Run: sudo crictl images --output json
I1208 00:22:33.892051 885129 containerd.go:627] all images are preloaded for containerd runtime.
I1208 00:22:33.892062 885129 containerd.go:534] Images already preloaded, skipping extraction
I1208 00:22:33.892119 885129 ssh_runner.go:195] Run: sudo crictl images --output json
I1208 00:22:33.920620 885129 containerd.go:627] all images are preloaded for containerd runtime.
I1208 00:22:33.920632 885129 cache_images.go:86] Images are preloaded, skipping loading
I1208 00:22:33.920638 885129 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1208 00:22:33.920725 885129 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-386544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
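The unit fragment above uses the standard systemd override idiom: the empty ExecStart= first clears the command inherited from the base kubelet unit, then the second ExecStart installs the replacement. Once the drop-in is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, the merged view is visible on the node with:

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in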
I1208 00:22:33.920786 885129 ssh_runner.go:195] Run: sudo crictl info
I1208 00:22:33.945962 885129 cni.go:84] Creating CNI manager for ""
I1208 00:22:33.945972 885129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1208 00:22:33.946002 885129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1208 00:22:33.946030 885129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-386544 NodeName:functional-386544 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1208 00:22:33.946163 885129 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-386544"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
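Worth noting in the generated config: evictionHard is pinned to 0% and imageGCHighThresholdPercent to 100, which effectively disables disk-pressure eviction and image garbage collection, a deliberate choice for short-lived CI nodes. minikube does not do this here, but on recent kubeadm releases the rendered file can be sanity-checked before init:

    /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml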
I1208 00:22:33.946243 885129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1208 00:22:33.954233 885129 binaries.go:51] Found k8s binaries, skipping transfer
I1208 00:22:33.954293 885129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1208 00:22:33.962121 885129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1208 00:22:33.975135 885129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1208 00:22:33.988515 885129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1208 00:22:34.002079 885129 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1208 00:22:34.009170 885129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1208 00:22:34.019711 885129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1208 00:22:34.131982 885129 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1208 00:22:34.149574 885129 certs.go:69] Setting up /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544 for IP: 192.168.49.2
I1208 00:22:34.149584 885129 certs.go:195] generating shared ca certs ...
I1208 00:22:34.149598 885129 certs.go:227] acquiring lock for ca certs: {Name:mke8f8dfc0490e70fb515a33067cb95745b736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:34.149768 885129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key
I1208 00:22:34.149818 885129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key
I1208 00:22:34.149824 885129 certs.go:257] generating profile certs ...
I1208 00:22:34.149880 885129 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key
I1208 00:22:34.149890 885129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt with IP's: []
I1208 00:22:34.554511 885129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt ...
I1208 00:22:34.554533 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.crt: {Name:mk8c4f8c2202b6c32ae112dd78671007ae8aced1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:34.554743 885129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key ...
I1208 00:22:34.554750 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/client.key: {Name:mk192376a5f948ac55d5b18453d878914eefadd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:34.554844 885129 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf
I1208 00:22:34.554855 885129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1208 00:22:34.804631 885129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf ...
I1208 00:22:34.804646 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf: {Name:mk7540ce13d5fd8467ffa88e2f59a8c37e04dfcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:34.804832 885129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf ...
I1208 00:22:34.804839 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf: {Name:mk0107f76d167d52676a2f955fcc6e93af70104d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:34.804924 885129 certs.go:382] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt.5e9334cf -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt
I1208 00:22:34.804998 885129 certs.go:386] copying /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key.5e9334cf -> /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key
I1208 00:22:34.805049 885129 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key
I1208 00:22:34.805060 885129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt with IP's: []
I1208 00:22:35.007292 885129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt ...
I1208 00:22:35.007310 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt: {Name:mk7784073660c6af0850dd3f80c2a68d59de8031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:35.007540 885129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key ...
I1208 00:22:35.007548 885129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key: {Name:mk8c989a08ec86e3c644d91c8ebe42e8b21d0beb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1208 00:22:35.007752 885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem (1338 bytes)
W1208 00:22:35.007798 885129 certs.go:480] ignoring /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711_empty.pem, impossibly tiny 0 bytes
I1208 00:22:35.007806 885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca-key.pem (1675 bytes)
I1208 00:22:35.007840 885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/ca.pem (1078 bytes)
I1208 00:22:35.007865 885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/cert.pem (1123 bytes)
I1208 00:22:35.007891 885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/certs/key.pem (1679 bytes)
I1208 00:22:35.007935 885129 certs.go:484] found cert: /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem (1708 bytes)
I1208 00:22:35.008553 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1208 00:22:35.029445 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1208 00:22:35.048467 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1208 00:22:35.066330 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1208 00:22:35.085182 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1208 00:22:35.104337 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1208 00:22:35.122614 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1208 00:22:35.141515 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/profiles/functional-386544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1208 00:22:35.160538 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/files/etc/ssl/certs/8467112.pem --> /usr/share/ca-certificates/8467112.pem (1708 bytes)
I1208 00:22:35.178815 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1208 00:22:35.197195 885129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22054-843440/.minikube/certs/846711.pem --> /usr/share/ca-certificates/846711.pem (1338 bytes)
I1208 00:22:35.215958 885129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1208 00:22:35.229251 885129 ssh_runner.go:195] Run: openssl version
I1208 00:22:35.236108 885129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8467112.pem
I1208 00:22:35.244050 885129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8467112.pem /etc/ssl/certs/8467112.pem
I1208 00:22:35.251957 885129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8467112.pem
I1208 00:22:35.255851 885129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 8 00:22 /usr/share/ca-certificates/8467112.pem
I1208 00:22:35.255911 885129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8467112.pem
I1208 00:22:35.297245 885129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1208 00:22:35.304788 885129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8467112.pem /etc/ssl/certs/3ec20f2e.0
I1208 00:22:35.312032 885129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1208 00:22:35.319323 885129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1208 00:22:35.326671 885129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1208 00:22:35.330413 885129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 8 00:12 /usr/share/ca-certificates/minikubeCA.pem
I1208 00:22:35.330531 885129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1208 00:22:35.371864 885129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1208 00:22:35.379473 885129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1208 00:22:35.387024 885129 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/846711.pem
I1208 00:22:35.394299 885129 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/846711.pem /etc/ssl/certs/846711.pem
I1208 00:22:35.401854 885129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/846711.pem
I1208 00:22:35.405481 885129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 8 00:22 /usr/share/ca-certificates/846711.pem
I1208 00:22:35.405537 885129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/846711.pem
I1208 00:22:35.446307 885129 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1208 00:22:35.454437 885129 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/846711.pem /etc/ssl/certs/51391683.0
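The openssl x509 -hash calls above compute the subject hash that OpenSSL uses to look up CAs in /etc/ssl/certs, which is why each PEM gets a <hash>.0 symlink. For example, reproducing the b5213941.0 link name created above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink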
I1208 00:22:35.462077 885129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1208 00:22:35.465776 885129 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1208 00:22:35.465820 885129 kubeadm.go:401] StartCluster: {Name:functional-386544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-386544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1208 00:22:35.465906 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1208 00:22:35.465981 885129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1208 00:22:35.496583 885129 cri.go:89] found id: ""
I1208 00:22:35.496646 885129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1208 00:22:35.504659 885129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1208 00:22:35.512654 885129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1208 00:22:35.512714 885129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1208 00:22:35.520589 885129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1208 00:22:35.520620 885129 kubeadm.go:158] found existing configuration files:
I1208 00:22:35.520675 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1208 00:22:35.528549 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1208 00:22:35.528605 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1208 00:22:35.536361 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1208 00:22:35.544141 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1208 00:22:35.544208 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1208 00:22:35.551783 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1208 00:22:35.560017 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1208 00:22:35.560083 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1208 00:22:35.567705 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1208 00:22:35.575391 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1208 00:22:35.575448 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1208 00:22:35.582984 885129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1208 00:22:35.642032 885129 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1208 00:22:35.642123 885129 kubeadm.go:319] [preflight] Running pre-flight checks
I1208 00:22:35.716738 885129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1208 00:22:35.716806 885129 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1208 00:22:35.716840 885129 kubeadm.go:319] OS: Linux
I1208 00:22:35.716883 885129 kubeadm.go:319] CGROUPS_CPU: enabled
I1208 00:22:35.716930 885129 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1208 00:22:35.716976 885129 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1208 00:22:35.717022 885129 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1208 00:22:35.717070 885129 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1208 00:22:35.717129 885129 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1208 00:22:35.717175 885129 kubeadm.go:319] CGROUPS_PIDS: enabled
I1208 00:22:35.717222 885129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1208 00:22:35.717267 885129 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1208 00:22:35.783782 885129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1208 00:22:35.783921 885129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1208 00:22:35.784041 885129 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1208 00:22:35.789795 885129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1208 00:22:35.796171 885129 out.go:252] - Generating certificates and keys ...
I1208 00:22:35.796272 885129 kubeadm.go:319] [certs] Using existing ca certificate authority
I1208 00:22:35.796349 885129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1208 00:22:35.861882 885129 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1208 00:22:36.050370 885129 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1208 00:22:36.241762 885129 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1208 00:22:36.401587 885129 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1208 00:22:36.714587 885129 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1208 00:22:36.714889 885129 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1208 00:22:36.815487 885129 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1208 00:22:36.815889 885129 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1208 00:22:36.879703 885129 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1208 00:22:37.144293 885129 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1208 00:22:37.362810 885129 kubeadm.go:319] [certs] Generating "sa" key and public key
I1208 00:22:37.362959 885129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1208 00:22:37.552787 885129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1208 00:22:37.809527 885129 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1208 00:22:38.042851 885129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1208 00:22:38.705792 885129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1208 00:22:38.801202 885129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1208 00:22:38.802097 885129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1208 00:22:38.804908 885129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1208 00:22:38.808308 885129 out.go:252] - Booting up control plane ...
I1208 00:22:38.808409 885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1208 00:22:38.808781 885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1208 00:22:38.810329 885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1208 00:22:38.827643 885129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1208 00:22:38.827920 885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1208 00:22:38.835521 885129 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1208 00:22:38.835816 885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1208 00:22:38.835990 885129 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1208 00:22:38.982514 885129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1208 00:22:38.982625 885129 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1208 00:26:38.974998 885129 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000117256s
I1208 00:26:38.975027 885129 kubeadm.go:319]
I1208 00:26:38.975129 885129 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1208 00:26:38.975238 885129 kubeadm.go:319] - The kubelet is not running
I1208 00:26:38.975572 885129 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1208 00:26:38.975579 885129 kubeadm.go:319]
I1208 00:26:38.975769 885129 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1208 00:26:38.975825 885129 kubeadm.go:319] - 'systemctl status kubelet'
I1208 00:26:38.975985 885129 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1208 00:26:38.975992 885129 kubeadm.go:319]
I1208 00:26:38.980887 885129 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1208 00:26:38.981423 885129 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1208 00:26:38.981538 885129 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1208 00:26:38.981823 885129 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1208 00:26:38.981827 885129 kubeadm.go:319]
I1208 00:26:38.981900 885129 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1208 00:26:38.982018 885129 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-386544 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000117256s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
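[editor's note] The second SystemVerification warning names a kubelet configuration option, 'FailCgroupV1'. A sketch for inspecting it in the config file the kubelet-start phase writes above (case-insensitive grep because the exact YAML field casing is an assumption here):

  # inside the node: show the cgroup v1 refusal switch in the generated kubelet config
  docker exec functional-386544 grep -n -i failcgroupv1 /var/lib/kubelet/config.yaml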
I1208 00:26:38.982112 885129 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1208 00:26:39.396654 885129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1208 00:26:39.410542 885129 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1208 00:26:39.410599 885129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1208 00:26:39.418641 885129 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1208 00:26:39.418651 885129 kubeadm.go:158] found existing configuration files:
I1208 00:26:39.418702 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1208 00:26:39.426662 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1208 00:26:39.426718 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1208 00:26:39.434100 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1208 00:26:39.442294 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1208 00:26:39.442359 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1208 00:26:39.450019 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1208 00:26:39.458090 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1208 00:26:39.458153 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1208 00:26:39.465940 885129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1208 00:26:39.474176 885129 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1208 00:26:39.474237 885129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1208 00:26:39.481812 885129 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1208 00:26:39.523064 885129 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1208 00:26:39.523398 885129 kubeadm.go:319] [preflight] Running pre-flight checks
I1208 00:26:39.602239 885129 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1208 00:26:39.602300 885129 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1208 00:26:39.602333 885129 kubeadm.go:319] OS: Linux
I1208 00:26:39.602374 885129 kubeadm.go:319] CGROUPS_CPU: enabled
I1208 00:26:39.602419 885129 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1208 00:26:39.602481 885129 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1208 00:26:39.602526 885129 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1208 00:26:39.602570 885129 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1208 00:26:39.602614 885129 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1208 00:26:39.602656 885129 kubeadm.go:319] CGROUPS_PIDS: enabled
I1208 00:26:39.602701 885129 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1208 00:26:39.602743 885129 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1208 00:26:39.665982 885129 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1208 00:26:39.666105 885129 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1208 00:26:39.666220 885129 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1208 00:26:39.674987 885129 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1208 00:26:39.678542 885129 out.go:252] - Generating certificates and keys ...
I1208 00:26:39.678631 885129 kubeadm.go:319] [certs] Using existing ca certificate authority
I1208 00:26:39.678695 885129 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1208 00:26:39.678771 885129 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1208 00:26:39.678832 885129 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1208 00:26:39.678901 885129 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1208 00:26:39.678955 885129 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1208 00:26:39.679017 885129 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1208 00:26:39.679081 885129 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1208 00:26:39.679157 885129 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1208 00:26:39.679229 885129 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1208 00:26:39.679266 885129 kubeadm.go:319] [certs] Using the existing "sa" key
I1208 00:26:39.679322 885129 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1208 00:26:39.823877 885129 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1208 00:26:39.913534 885129 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1208 00:26:40.267910 885129 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1208 00:26:40.705717 885129 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1208 00:26:40.910523 885129 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1208 00:26:40.911323 885129 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1208 00:26:40.914133 885129 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1208 00:26:40.917460 885129 out.go:252] - Booting up control plane ...
I1208 00:26:40.917574 885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1208 00:26:40.917674 885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1208 00:26:40.919234 885129 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1208 00:26:40.940979 885129 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1208 00:26:40.941259 885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1208 00:26:40.948778 885129 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1208 00:26:40.949041 885129 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1208 00:26:40.949081 885129 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1208 00:26:41.087781 885129 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1208 00:26:41.087900 885129 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1208 00:30:41.087554 885129 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000226772s
I1208 00:30:41.087575 885129 kubeadm.go:319]
I1208 00:30:41.087631 885129 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1208 00:30:41.087664 885129 kubeadm.go:319] - The kubelet is not running
I1208 00:30:41.087801 885129 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1208 00:30:41.087813 885129 kubeadm.go:319]
I1208 00:30:41.087917 885129 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1208 00:30:41.087949 885129 kubeadm.go:319] - 'systemctl status kubelet'
I1208 00:30:41.087979 885129 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1208 00:30:41.087982 885129 kubeadm.go:319]
I1208 00:30:41.092295 885129 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1208 00:30:41.092709 885129 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1208 00:30:41.092818 885129 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1208 00:30:41.093053 885129 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1208 00:30:41.093058 885129 kubeadm.go:319]
I1208 00:30:41.093124 885129 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1208 00:30:41.093177 885129 kubeadm.go:403] duration metric: took 8m5.627362264s to StartCluster
I1208 00:30:41.093211 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1208 00:30:41.093285 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1208 00:30:41.130255 885129 cri.go:89] found id: ""
I1208 00:30:41.130269 885129 logs.go:282] 0 containers: []
W1208 00:30:41.130276 885129 logs.go:284] No container was found matching "kube-apiserver"
I1208 00:30:41.130281 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1208 00:30:41.130357 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1208 00:30:41.163078 885129 cri.go:89] found id: ""
I1208 00:30:41.163093 885129 logs.go:282] 0 containers: []
W1208 00:30:41.163100 885129 logs.go:284] No container was found matching "etcd"
I1208 00:30:41.163105 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1208 00:30:41.163165 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1208 00:30:41.190236 885129 cri.go:89] found id: ""
I1208 00:30:41.190250 885129 logs.go:282] 0 containers: []
W1208 00:30:41.190257 885129 logs.go:284] No container was found matching "coredns"
I1208 00:30:41.190262 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1208 00:30:41.190323 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1208 00:30:41.217596 885129 cri.go:89] found id: ""
I1208 00:30:41.217610 885129 logs.go:282] 0 containers: []
W1208 00:30:41.217618 885129 logs.go:284] No container was found matching "kube-scheduler"
I1208 00:30:41.217623 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1208 00:30:41.217686 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1208 00:30:41.245271 885129 cri.go:89] found id: ""
I1208 00:30:41.245285 885129 logs.go:282] 0 containers: []
W1208 00:30:41.245292 885129 logs.go:284] No container was found matching "kube-proxy"
I1208 00:30:41.245298 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1208 00:30:41.245359 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1208 00:30:41.270742 885129 cri.go:89] found id: ""
I1208 00:30:41.270760 885129 logs.go:282] 0 containers: []
W1208 00:30:41.270768 885129 logs.go:284] No container was found matching "kube-controller-manager"
I1208 00:30:41.270773 885129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1208 00:30:41.270839 885129 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1208 00:30:41.300287 885129 cri.go:89] found id: ""
I1208 00:30:41.300302 885129 logs.go:282] 0 containers: []
W1208 00:30:41.300310 885129 logs.go:284] No container was found matching "kindnet"
I1208 00:30:41.300318 885129 logs.go:123] Gathering logs for containerd ...
I1208 00:30:41.300327 885129 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1208 00:30:41.339554 885129 logs.go:123] Gathering logs for container status ...
I1208 00:30:41.339575 885129 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1208 00:30:41.375527 885129 logs.go:123] Gathering logs for kubelet ...
I1208 00:30:41.375543 885129 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1208 00:30:41.432450 885129 logs.go:123] Gathering logs for dmesg ...
I1208 00:30:41.432469 885129 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1208 00:30:41.447916 885129 logs.go:123] Gathering logs for describe nodes ...
I1208 00:30:41.447932 885129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1208 00:30:41.515619 885129 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1208 00:30:41.506869 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.507467 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.509078 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.509516 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.511082 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1208 00:30:41.506869 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.507467 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.509078 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.509516 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:41.511082 4799 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
W1208 00:30:41.515633 885129 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000226772s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1208 00:30:41.515663 885129 out.go:285] *
W1208 00:30:41.515738 885129 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000226772s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1208 00:30:41.515765 885129 out.go:285] *
W1208 00:30:41.517900 885129 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
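[editor's note] A sketch of the log-collection step the box above asks for, using the binary and profile under test:

  out/minikube-linux-arm64 -p functional-386544 logs --file=logs.txt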
I1208 00:30:41.524942 885129 out.go:203]
W1208 00:30:41.527855 885129 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000226772s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1208 00:30:41.527904 885129 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1208 00:30:41.527923 885129 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1208 00:30:41.531137 885129 out.go:203]
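[editor's note] A sketch of the retry the Suggestion line proposes, spelled out against the binary and profile used in this test (whether the systemd cgroup driver alone resolves the cgroup v1 validation failure is untested here):

  out/minikube-linux-arm64 start -p functional-386544 \
    --extra-config=kubelet.cgroup-driver=systemd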
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691456285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691521500Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691639794Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691729928Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691793453Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691859209Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691924539Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.691984897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.692058899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.692159011Z" level=info msg="Connect containerd service"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.692536040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.693237421Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.703828259Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.704048470Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.703965622Z" level=info msg="Start subscribing containerd event"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.708322970Z" level=info msg="Start recovering state"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749034666Z" level=info msg="Start event monitor"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749218758Z" level=info msg="Start cni network conf syncer for default"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749302303Z" level=info msg="Start streaming server"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749373844Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749437295Z" level=info msg="runtime interface starting up..."
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749492483Z" level=info msg="starting plugins..."
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.749559028Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 08 00:22:33 functional-386544 systemd[1]: Started containerd.service - containerd container runtime.
Dec 08 00:22:33 functional-386544 containerd[762]: time="2025-12-08T00:22:33.751989055Z" level=info msg="containerd successfully booted in 0.083175s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1208 00:30:42.569164 4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:42.569955 4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:42.571679 4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:42.572058 4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1208 00:30:42.573541 4906 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 7 22:18] systemd-journald[225]: Failed to send stream file descriptor to service manager: Connection refused
[Dec 8 00:11] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
00:30:42 up 5:13, 0 user, load average: 0.33, 0.75, 1.46
Linux functional-386544 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 08 00:30:39 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:39 functional-386544 kubelet[4708]: E1208 00:30:39.634748 4708 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 08 00:30:39 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 08 00:30:39 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 08 00:30:40 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 08 00:30:40 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:40 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:40 functional-386544 kubelet[4714]: E1208 00:30:40.388360 4714 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 08 00:30:40 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 08 00:30:40 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 08 00:30:41 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:41 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:41 functional-386544 kubelet[4719]: E1208 00:30:41.156972 4719 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 08 00:30:41 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:41 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:41 functional-386544 kubelet[4816]: E1208 00:30:41.882898 4816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 08 00:30:41 functional-386544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 08 00:30:42 functional-386544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 08 00:30:42 functional-386544 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 08 00:30:42 functional-386544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
-- /stdout --
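[editor's note] The kubelet journal above shows the real failure loop: every systemd restart exits during configuration validation because the kubelet is set to refuse cgroup v1 hosts (the 'FailCgroupV1' option flagged in the earlier warnings), so it never serves /healthz and kubeadm's 4m0s wait times out. A sketch for confirming the host's cgroup version (a standard check: prints cgroup2fs on cgroup v2, typically tmpfs on cgroup v1):

  stat -fc %T /sys/fs/cgroup/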
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-386544 -n functional-386544: exit status 6 (342.836058ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1208 00:30:43.036873 890864 status.go:458] kubeconfig endpoint: get endpoint: "functional-386544" does not appear in /home/jenkins/minikube-integration/22054-843440/kubeconfig
** /stderr **
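[editor's note] A sketch of the fix the stdout WARNING above suggests for the stale kubectl context, with the profile made explicit:

  out/minikube-linux-arm64 update-context -p functional-386544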
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-386544" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.59s)