=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1216 02:43:51.135343 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:44:18.850734 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.139507 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.146097 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.157661 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.179226 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.220741 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.302229 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.463824 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:51.785517 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:52.427686 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:53.709383 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:45:56.270762 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:01.392642 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:11.634624 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:32.116053 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:47:13.078553 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:48:35.001603 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-853651/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:48:51.139336 1798370 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/addons-870019/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.63202031s)
-- stdout --
* [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22158
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-389759" primary control-plane node in "functional-389759" cluster
* Pulling base image v0.0.48-1765575274-22117 ...
* Found network options:
- HTTP_PROXY=localhost:41589
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:41589 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001007216s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001643344s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
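The kubeadm warnings and the minikube suggestion above name two concrete knobs: the kubelet cgroup driver and the FailCgroupV1 setting required on cgroup v1 hosts with kubelet v1.35+. A minimal remediation sketch combining both (the start command and --extra-config flag are copied from this run and the suggestion above; writing the KubeletConfiguration fragment to a standalone file named failcgroupv1.yaml is an illustrative assumption, since in practice it would need to be merged into the kubelet config kubeadm writes, e.g. /var/lib/kubelet/config.yaml):

    # Retry the same start with the cgroup driver override suggested above
    out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # KubeletConfiguration fragment per the cgroups v1 deprecation warning above
    # (field spelling per kubelet.config.k8s.io/v1beta1; standalone file is illustrative)
    cat <<'EOF' > failcgroupv1.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF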
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-389759
helpers_test.go:244: (dbg) docker inspect functional-389759:
-- stdout --
[
{
"Id": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
"Created": "2025-12-16T02:41:46.85492681Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1837192,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-16T02:41:46.915844066Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c84ca27951472b9c4a9ed85a27c99cbe96a939682ff6a02c57a032f53538f774",
"ResolvConfPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hostname",
"HostsPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/hosts",
"LogPath": "/var/lib/docker/containers/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7/23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7-json.log",
"Name": "/functional-389759",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-389759:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-389759",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "23b85b27a161549803b21af5e7ffb28db2ea58833eb13847ddd14932326baff7",
"LowerDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188-init/diff:/var/lib/docker/overlay2/7abbdba2b9841229906485e8acdc433ea00737d7b3f5bc5edd5d6c02f7da0a36/diff",
"MergedDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/merged",
"UpperDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/diff",
"WorkDir": "/var/lib/docker/overlay2/1c773f2fc46424886a6c2263518a88fb6f947a0a341643f10f61060c9be74188/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-389759",
"Source": "/var/lib/docker/volumes/functional-389759/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-389759",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-389759",
"name.minikube.sigs.k8s.io": "functional-389759",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "4aef1987aa1b800e31e70051024768a7513d9a9f8c674d2a96d04661e0bec70e",
"SandboxKey": "/var/run/docker/netns/4aef1987aa1b",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34354"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34355"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34358"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34356"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34357"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-389759": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "56:6a:fd:73:00:f6",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "feeda5df1ffe811e491c4bbf4db3cbd953bc8b7a0aa2027e551ea5d70d3923d5",
"EndpointID": "adf0a87f19266958641771b082babcea3009b918ea91b332fc09b5936085c2a9",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-389759",
"23b85b27a161"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-389759 -n functional-389759: exit status 6 (328.40053ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1216 02:50:02.792324 1842311 status.go:458] kubeconfig endpoint: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
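The exit status 6 here follows directly from the stale-context warning: the kubeconfig no longer carries an endpoint for this profile. A minimal sketch of the repair path the warning itself recommends (update-context is the command named in the output above; the verification step and its jsonpath query are an illustrative assumption):

    # Rewrite the kubeconfig entry for the profile, as the warning suggests
    out/minikube-linux-arm64 update-context -p functional-389759

    # Confirm the endpoint now appears in the kubeconfig used by this run
    kubectl config view --kubeconfig /home/jenkins/minikube-integration/22158-1796512/kubeconfig \
      -o jsonpath='{.clusters[?(@.name=="functional-389759")].cluster.server}'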
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-389759 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ mount │ -p functional-853651 --kill=true │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ │
│ addons │ functional-853651 addons list │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ addons │ functional-853651 addons list -o json │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ service │ functional-853651 service hello-node-connect --url │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ start │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ │
│ start │ -p functional-853651 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ │
│ start │ -p functional-853651 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ │
│ dashboard │ --url --port 36195 -p functional-853651 --alsologtostderr -v=1 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ service │ functional-853651 service list │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ service │ functional-853651 service list -o json │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ service │ functional-853651 service --namespace=default --https --url hello-node │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ service │ functional-853651 service hello-node --url --format={{.IP}} │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ service │ functional-853651 service hello-node --url │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ image │ functional-853651 image ls --format short --alsologtostderr │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ image │ functional-853651 image ls --format yaml --alsologtostderr │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ ssh │ functional-853651 ssh pgrep buildkitd │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ │
│ image │ functional-853651 image ls --format json --alsologtostderr │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ image │ functional-853651 image build -t localhost/my-image:functional-853651 testdata/build --alsologtostderr │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ image │ functional-853651 image ls --format table --alsologtostderr │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ update-context │ functional-853651 update-context --alsologtostderr -v=2 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ update-context │ functional-853651 update-context --alsologtostderr -v=2 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ update-context │ functional-853651 update-context --alsologtostderr -v=2 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ image │ functional-853651 image ls │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ delete │ -p functional-853651 │ functional-853651 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ 16 Dec 25 02:41 UTC │
│ start │ -p functional-389759 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-389759 │ jenkins │ v1.37.0 │ 16 Dec 25 02:41 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/16 02:41:41
Running on machine: ip-172-31-29-130
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1216 02:41:41.863776 1836804 out.go:360] Setting OutFile to fd 1 ...
I1216 02:41:41.863878 1836804 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:41.863882 1836804 out.go:374] Setting ErrFile to fd 2...
I1216 02:41:41.863892 1836804 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:41:41.864246 1836804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-1796512/.minikube/bin
I1216 02:41:41.864730 1836804 out.go:368] Setting JSON to false
I1216 02:41:41.865580 1836804 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":30246,"bootTime":1765822656,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
I1216 02:41:41.865669 1836804 start.go:143] virtualization:
I1216 02:41:41.870933 1836804 out.go:179] * [functional-389759] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1216 02:41:41.875573 1836804 out.go:179] - MINIKUBE_LOCATION=22158
I1216 02:41:41.875738 1836804 notify.go:221] Checking for updates...
I1216 02:41:41.882559 1836804 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1216 02:41:41.885814 1836804 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22158-1796512/kubeconfig
I1216 02:41:41.889016 1836804 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-1796512/.minikube
I1216 02:41:41.892027 1836804 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1216 02:41:41.895080 1836804 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1216 02:41:41.898309 1836804 driver.go:422] Setting default libvirt URI to qemu:///system
I1216 02:41:41.929920 1836804 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1216 02:41:41.930030 1836804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 02:41:42.007624 1836804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 02:41:41.995689401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 02:41:42.007729 1836804 docker.go:319] overlay module found
I1216 02:41:42.015251 1836804 out.go:179] * Using the docker driver based on user configuration
I1216 02:41:42.020637 1836804 start.go:309] selected driver: docker
I1216 02:41:42.020680 1836804 start.go:927] validating driver "docker" against <nil>
I1216 02:41:42.020694 1836804 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1216 02:41:42.021642 1836804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 02:41:42.112719 1836804 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-16 02:41:42.086328211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1216 02:41:42.112960 1836804 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1216 02:41:42.113241 1836804 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 02:41:42.116434 1836804 out.go:179] * Using Docker driver with root privileges
I1216 02:41:42.119601 1836804 cni.go:84] Creating CNI manager for ""
I1216 02:41:42.119675 1836804 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1216 02:41:42.119687 1836804 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1216 02:41:42.119781 1836804 start.go:353] cluster config:
{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 02:41:42.123394 1836804 out.go:179] * Starting "functional-389759" primary control-plane node in "functional-389759" cluster
I1216 02:41:42.128174 1836804 cache.go:134] Beginning downloading kic base image for docker with containerd
I1216 02:41:42.131522 1836804 out.go:179] * Pulling base image v0.0.48-1765575274-22117 ...
I1216 02:41:42.134670 1836804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1216 02:41:42.134730 1836804 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1216 02:41:42.134741 1836804 cache.go:65] Caching tarball of preloaded images
I1216 02:41:42.134766 1836804 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon
I1216 02:41:42.134852 1836804 preload.go:238] Found /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1216 02:41:42.134861 1836804 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1216 02:41:42.135286 1836804 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
I1216 02:41:42.135320 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json: {Name:mk1bd7f7413370999f48167ac4c3dbb5d6b00856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:42.159600 1836804 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb in local docker daemon, skipping pull
I1216 02:41:42.159614 1836804 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb exists in daemon, skipping load
I1216 02:41:42.159636 1836804 cache.go:243] Successfully downloaded all kic artifacts
I1216 02:41:42.159674 1836804 start.go:360] acquireMachinesLock for functional-389759: {Name:mk3e5ab49157bd15c3c44767733b5ee4719660f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 02:41:42.159813 1836804 start.go:364] duration metric: took 123.984µs to acquireMachinesLock for "functional-389759"
I1216 02:41:42.159842 1836804 start.go:93] Provisioning new machine with config: &{Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1216 02:41:42.159922 1836804 start.go:125] createHost starting for "" (driver="docker")
I1216 02:41:42.163617 1836804 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1216 02:41:42.164034 1836804 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41589 to docker env.
I1216 02:41:42.164064 1836804 start.go:159] libmachine.API.Create for "functional-389759" (driver="docker")
I1216 02:41:42.164089 1836804 client.go:173] LocalClient.Create starting
I1216 02:41:42.164165 1836804 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem
I1216 02:41:42.164219 1836804 main.go:143] libmachine: Decoding PEM data...
I1216 02:41:42.164235 1836804 main.go:143] libmachine: Parsing certificate...
I1216 02:41:42.164293 1836804 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem
I1216 02:41:42.164315 1836804 main.go:143] libmachine: Decoding PEM data...
I1216 02:41:42.164327 1836804 main.go:143] libmachine: Parsing certificate...
I1216 02:41:42.164825 1836804 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 02:41:42.190398 1836804 cli_runner.go:211] docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 02:41:42.190480 1836804 network_create.go:284] running [docker network inspect functional-389759] to gather additional debugging logs...
I1216 02:41:42.190497 1836804 cli_runner.go:164] Run: docker network inspect functional-389759
W1216 02:41:42.216864 1836804 cli_runner.go:211] docker network inspect functional-389759 returned with exit code 1
I1216 02:41:42.216889 1836804 network_create.go:287] error running [docker network inspect functional-389759]: docker network inspect functional-389759: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-389759 not found
I1216 02:41:42.216907 1836804 network_create.go:289] output of [docker network inspect functional-389759]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-389759 not found
** /stderr **
I1216 02:41:42.217010 1836804 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 02:41:42.259175 1836804 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018c1660}
I1216 02:41:42.259210 1836804 network_create.go:124] attempt to create docker network functional-389759 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1216 02:41:42.259272 1836804 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-389759 functional-389759
I1216 02:41:42.329083 1836804 network_create.go:108] docker network functional-389759 192.168.49.0/24 created
I1216 02:41:42.329108 1836804 kic.go:121] calculated static IP "192.168.49.2" for the "functional-389759" container
I1216 02:41:42.329190 1836804 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1216 02:41:42.347605 1836804 cli_runner.go:164] Run: docker volume create functional-389759 --label name.minikube.sigs.k8s.io=functional-389759 --label created_by.minikube.sigs.k8s.io=true
I1216 02:41:42.366361 1836804 oci.go:103] Successfully created a docker volume functional-389759
I1216 02:41:42.366448 1836804 cli_runner.go:164] Run: docker run --rm --name functional-389759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-389759 --entrypoint /usr/bin/test -v functional-389759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -d /var/lib
I1216 02:41:42.894590 1836804 oci.go:107] Successfully prepared a docker volume functional-389759
I1216 02:41:42.894666 1836804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1216 02:41:42.894675 1836804 kic.go:194] Starting extracting preloaded images to volume ...
I1216 02:41:42.894754 1836804 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-389759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir
I1216 02:41:46.770975 1836804 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22158-1796512/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-389759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb -I lz4 -xf /preloaded.tar -C /extractDir: (3.876187627s)
I1216 02:41:46.770997 1836804 kic.go:203] duration metric: took 3.87631852s to extract preloaded images to volume ...
W1216 02:41:46.771179 1836804 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1216 02:41:46.771280 1836804 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1216 02:41:46.840011 1836804 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-389759 --name functional-389759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-389759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-389759 --network functional-389759 --ip 192.168.49.2 --volume functional-389759:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb
I1216 02:41:47.118824 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Running}}
I1216 02:41:47.140960 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 02:41:47.165207 1836804 cli_runner.go:164] Run: docker exec functional-389759 stat /var/lib/dpkg/alternatives/iptables
I1216 02:41:47.219862 1836804 oci.go:144] the created container "functional-389759" has a running status.
I1216 02:41:47.219881 1836804 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa...
I1216 02:41:47.371815 1836804 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1216 02:41:47.403286 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 02:41:47.435796 1836804 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1216 02:41:47.435807 1836804 kic_runner.go:114] Args: [docker exec --privileged functional-389759 chown docker:docker /home/docker/.ssh/authorized_keys]
I1216 02:41:47.486509 1836804 cli_runner.go:164] Run: docker container inspect functional-389759 --format={{.State.Status}}
I1216 02:41:47.514796 1836804 machine.go:94] provisionDockerMachine start ...
I1216 02:41:47.514885 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:47.545129 1836804 main.go:143] libmachine: Using SSH client type: native
I1216 02:41:47.545484 1836804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 34354 <nil> <nil>}
I1216 02:41:47.545491 1836804 main.go:143] libmachine: About to run SSH command:
hostname
I1216 02:41:47.546330 1836804 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1216 02:41:50.682534 1836804 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
I1216 02:41:50.682550 1836804 ubuntu.go:182] provisioning hostname "functional-389759"
I1216 02:41:50.682612 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:50.700243 1836804 main.go:143] libmachine: Using SSH client type: native
I1216 02:41:50.700547 1836804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 34354 <nil> <nil>}
I1216 02:41:50.700556 1836804 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-389759 && echo "functional-389759" | sudo tee /etc/hostname
I1216 02:41:50.844487 1836804 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-389759
I1216 02:41:50.844577 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:50.863332 1836804 main.go:143] libmachine: Using SSH client type: native
I1216 02:41:50.863640 1836804 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 34354 <nil> <nil>}
I1216 02:41:50.863655 1836804 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-389759' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-389759/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-389759' | sudo tee -a /etc/hosts;
  fi
fi
I1216 02:41:50.995262 1836804 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1216 02:41:50.995280 1836804 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22158-1796512/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-1796512/.minikube}
I1216 02:41:50.995306 1836804 ubuntu.go:190] setting up certificates
I1216 02:41:50.995314 1836804 provision.go:84] configureAuth start
I1216 02:41:50.995381 1836804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
I1216 02:41:51.014044 1836804 provision.go:143] copyHostCerts
I1216 02:41:51.014121 1836804 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem, removing ...
I1216 02:41:51.014129 1836804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem
I1216 02:41:51.014234 1836804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.pem (1082 bytes)
I1216 02:41:51.014334 1836804 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem, removing ...
I1216 02:41:51.014344 1836804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem
I1216 02:41:51.014373 1836804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/cert.pem (1123 bytes)
I1216 02:41:51.014460 1836804 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem, removing ...
I1216 02:41:51.014463 1836804 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem
I1216 02:41:51.014489 1836804 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-1796512/.minikube/key.pem (1675 bytes)
I1216 02:41:51.014549 1836804 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem org=jenkins.functional-389759 san=[127.0.0.1 192.168.49.2 functional-389759 localhost minikube]
I1216 02:41:51.199556 1836804 provision.go:177] copyRemoteCerts
I1216 02:41:51.199620 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1216 02:41:51.199658 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:51.218305 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 02:41:51.315016 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1216 02:41:51.332716 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1216 02:41:51.350176 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1216 02:41:51.367601 1836804 provision.go:87] duration metric: took 372.263595ms to configureAuth
I1216 02:41:51.367619 1836804 ubuntu.go:206] setting minikube options for container-runtime
I1216 02:41:51.367812 1836804 config.go:182] Loaded profile config "functional-389759": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1216 02:41:51.367819 1836804 machine.go:97] duration metric: took 3.853013625s to provisionDockerMachine
I1216 02:41:51.367825 1836804 client.go:176] duration metric: took 9.203730986s to LocalClient.Create
I1216 02:41:51.367850 1836804 start.go:167] duration metric: took 9.203786132s to libmachine.API.Create "functional-389759"
I1216 02:41:51.367856 1836804 start.go:293] postStartSetup for "functional-389759" (driver="docker")
I1216 02:41:51.367865 1836804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1216 02:41:51.367913 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1216 02:41:51.367950 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:51.384750 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 02:41:51.483063 1836804 ssh_runner.go:195] Run: cat /etc/os-release
I1216 02:41:51.486278 1836804 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1216 02:41:51.486296 1836804 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1216 02:41:51.486306 1836804 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/addons for local assets ...
I1216 02:41:51.486362 1836804 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-1796512/.minikube/files for local assets ...
I1216 02:41:51.486447 1836804 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem -> 17983702.pem in /etc/ssl/certs
I1216 02:41:51.486519 1836804 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts -> hosts in /etc/test/nested/copy/1798370
I1216 02:41:51.486564 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1798370
I1216 02:41:51.494035 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /etc/ssl/certs/17983702.pem (1708 bytes)
I1216 02:41:51.511101 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/test/nested/copy/1798370/hosts --> /etc/test/nested/copy/1798370/hosts (40 bytes)
I1216 02:41:51.528790 1836804 start.go:296] duration metric: took 160.921152ms for postStartSetup
I1216 02:41:51.529182 1836804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
I1216 02:41:51.545938 1836804 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/config.json ...
I1216 02:41:51.546198 1836804 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1216 02:41:51.546243 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:51.564062 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 02:41:51.660130 1836804 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1216 02:41:51.665460 1836804 start.go:128] duration metric: took 9.505524518s to createHost
I1216 02:41:51.665475 1836804 start.go:83] releasing machines lock for "functional-389759", held for 9.505654214s
I1216 02:41:51.665567 1836804 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-389759
I1216 02:41:51.688175 1836804 out.go:179] * Found network options:
I1216 02:41:51.691642 1836804 out.go:179] - HTTP_PROXY=localhost:41589
W1216 02:41:51.694158 1836804 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1216 02:41:51.696868 1836804 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
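The warning above fires because HTTP_PROXY is set (localhost:41589) while NO_PROXY does not cover the minikube IP. A minimal sketch of the usual fix before starting the profile, assuming the subnet this run picked (192.168.49.0/24) and its service CIDR (10.96.0.0/12); everything else in the command is illustrative:

    # Exempt local and cluster-internal traffic from the proxy before starting minikube.
    export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24
    export no_proxy=$NO_PROXY
    minikube start -p functional-389759 --driver=docker --container-runtime=containerd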
I1216 02:41:51.699712 1836804 ssh_runner.go:195] Run: cat /version.json
I1216 02:41:51.699757 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:51.699789 1836804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1216 02:41:51.699840 1836804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389759
I1216 02:41:51.719512 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 02:41:51.725781 1836804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/22158-1796512/.minikube/machines/functional-389759/id_rsa Username:docker}
I1216 02:41:51.810634 1836804 ssh_runner.go:195] Run: systemctl --version
I1216 02:41:51.903750 1836804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1216 02:41:51.908055 1836804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1216 02:41:51.908116 1836804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1216 02:41:51.934468 1836804 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1216 02:41:51.934483 1836804 start.go:496] detecting cgroup driver to use...
I1216 02:41:51.934528 1836804 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1216 02:41:51.934584 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1216 02:41:51.950645 1836804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1216 02:41:51.964217 1836804 docker.go:218] disabling cri-docker service (if available) ...
I1216 02:41:51.964281 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1216 02:41:51.981749 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1216 02:41:52.012353 1836804 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1216 02:41:52.134449 1836804 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1216 02:41:52.251164 1836804 docker.go:234] disabling docker service ...
I1216 02:41:52.251219 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1216 02:41:52.274340 1836804 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1216 02:41:52.287362 1836804 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1216 02:41:52.417205 1836804 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1216 02:41:52.528413 1836804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1216 02:41:52.541544 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1216 02:41:52.559036 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1216 02:41:52.569170 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1216 02:41:52.578061 1836804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1216 02:41:52.578120 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1216 02:41:52.587182 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1216 02:41:52.596380 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1216 02:41:52.605092 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1216 02:41:52.614156 1836804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1216 02:41:52.622599 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1216 02:41:52.631794 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1216 02:41:52.641242 1836804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1216 02:41:52.650022 1836804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1216 02:41:52.657994 1836804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1216 02:41:52.665666 1836804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 02:41:52.776744 1836804 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1216 02:41:52.911490 1836804 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1216 02:41:52.911568 1836804 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1216 02:41:52.915572 1836804 start.go:564] Will wait 60s for crictl version
I1216 02:41:52.915629 1836804 ssh_runner.go:195] Run: which crictl
I1216 02:41:52.919425 1836804 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1216 02:41:52.943918 1836804 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1216 02:41:52.943990 1836804 ssh_runner.go:195] Run: containerd --version
I1216 02:41:52.965713 1836804 ssh_runner.go:195] Run: containerd --version
I1216 02:41:52.992536 1836804 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1216 02:41:52.995355 1836804 cli_runner.go:164] Run: docker network inspect functional-389759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 02:41:53.013451 1836804 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1216 02:41:53.017661 1836804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 02:41:53.028221 1836804 kubeadm.go:884] updating cluster {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1216 02:41:53.028325 1836804 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1216 02:41:53.028400 1836804 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:41:53.059210 1836804 containerd.go:627] all images are preloaded for containerd runtime.
I1216 02:41:53.059222 1836804 containerd.go:534] Images already preloaded, skipping extraction
I1216 02:41:53.059290 1836804 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 02:41:53.084175 1836804 containerd.go:627] all images are preloaded for containerd runtime.
I1216 02:41:53.084188 1836804 cache_images.go:86] Images are preloaded, skipping loading
I1216 02:41:53.084194 1836804 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1216 02:41:53.084284 1836804 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-389759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
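The drop-in above is what provisioning writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hedged way to confirm what systemd actually loaded, reusing the docker exec pattern this log already uses against the node container:

    # Show the effective kubelet unit plus drop-ins as systemd sees them.
    docker exec functional-389759 systemctl cat kubelet
    # Or read the drop-in file directly at the path the scp below targets.
    docker exec functional-389759 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf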
I1216 02:41:53.084349 1836804 ssh_runner.go:195] Run: sudo crictl info
I1216 02:41:53.111406 1836804 cni.go:84] Creating CNI manager for ""
I1216 02:41:53.111416 1836804 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1216 02:41:53.111437 1836804 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1216 02:41:53.111465 1836804 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-389759 NodeName:functional-389759 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1216 02:41:53.111571 1836804 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-389759"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
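The rendered multi-document config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init. As a sketch (assuming kubeadm's validator subcommand is available in this build), it can be checked offline with the same pinned binary this run invokes:

    # Validate the staged kubeadm config inside the node container.
    docker exec functional-389759 /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml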
I1216 02:41:53.111648 1836804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1216 02:41:53.119796 1836804 binaries.go:51] Found k8s binaries, skipping transfer
I1216 02:41:53.119859 1836804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1216 02:41:53.127945 1836804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1216 02:41:53.141403 1836804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1216 02:41:53.154427 1836804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1216 02:41:53.167712 1836804 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1216 02:41:53.171523 1836804 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 02:41:53.181750 1836804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 02:41:53.292618 1836804 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 02:41:53.309917 1836804 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759 for IP: 192.168.49.2
I1216 02:41:53.309928 1836804 certs.go:195] generating shared ca certs ...
I1216 02:41:53.309942 1836804 certs.go:227] acquiring lock for ca certs: {Name:mk605b098708818a8764b65ddcce21cc1906d812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.310112 1836804 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key
I1216 02:41:53.310150 1836804 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key
I1216 02:41:53.310156 1836804 certs.go:257] generating profile certs ...
I1216 02:41:53.310215 1836804 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key
I1216 02:41:53.310224 1836804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt with IP's: []
I1216 02:41:53.385071 1836804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt ...
I1216 02:41:53.385087 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.crt: {Name:mkf5b2afa4ff09d31576d440e6ade56ed6d1cf8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.385286 1836804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key ...
I1216 02:41:53.385293 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/client.key: {Name:mk4ae7b1c21386e7b6110d43ab0dc54099526423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.385368 1836804 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84
I1216 02:41:53.385379 1836804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1216 02:41:53.459433 1836804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84 ...
I1216 02:41:53.459461 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84: {Name:mk7156a057fd9faaf2272c7a798fc39457ca3ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.460009 1836804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84 ...
I1216 02:41:53.460019 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84: {Name:mkddc74ebf3e5006e4368dcb8c617f202b5feb32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.460100 1836804 certs.go:382] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt.a3e65e84 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt
I1216 02:41:53.460189 1836804 certs.go:386] copying /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key.a3e65e84 -> /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key
I1216 02:41:53.460243 1836804 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key
I1216 02:41:53.460256 1836804 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt with IP's: []
I1216 02:41:53.922797 1836804 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt ...
I1216 02:41:53.922814 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt: {Name:mkeba81ebdc3f059e5b4e04c8bed296e88d0b249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.923020 1836804 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key ...
I1216 02:41:53.923030 1836804 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key: {Name:mk3fab92e43e0e11ba066d8d273ef0f6e251a40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 02:41:53.923245 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem (1338 bytes)
W1216 02:41:53.923292 1836804 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370_empty.pem, impossibly tiny 0 bytes
I1216 02:41:53.923301 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca-key.pem (1675 bytes)
I1216 02:41:53.923327 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/ca.pem (1082 bytes)
I1216 02:41:53.923351 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/cert.pem (1123 bytes)
I1216 02:41:53.923374 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/key.pem (1675 bytes)
I1216 02:41:53.923417 1836804 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem (1708 bytes)
I1216 02:41:53.924017 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1216 02:41:53.942778 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1216 02:41:53.962164 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1216 02:41:53.979895 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1216 02:41:53.997572 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1216 02:41:54.021468 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1216 02:41:54.040597 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1216 02:41:54.059184 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/profiles/functional-389759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1216 02:41:54.077415 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/files/etc/ssl/certs/17983702.pem --> /usr/share/ca-certificates/17983702.pem (1708 bytes)
I1216 02:41:54.095933 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1216 02:41:54.115132 1836804 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-1796512/.minikube/certs/1798370.pem --> /usr/share/ca-certificates/1798370.pem (1338 bytes)
I1216 02:41:54.133518 1836804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1216 02:41:54.146928 1836804 ssh_runner.go:195] Run: openssl version
I1216 02:41:54.153599 1836804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17983702.pem
I1216 02:41:54.161652 1836804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17983702.pem /etc/ssl/certs/17983702.pem
I1216 02:41:54.169616 1836804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17983702.pem
I1216 02:41:54.173413 1836804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:41 /usr/share/ca-certificates/17983702.pem
I1216 02:41:54.173472 1836804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17983702.pem
I1216 02:41:54.214943 1836804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1216 02:41:54.222600 1836804 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17983702.pem /etc/ssl/certs/3ec20f2e.0
I1216 02:41:54.230149 1836804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1216 02:41:54.237714 1836804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1216 02:41:54.245518 1836804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1216 02:41:54.249850 1836804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:31 /usr/share/ca-certificates/minikubeCA.pem
I1216 02:41:54.249908 1836804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1216 02:41:54.291579 1836804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1216 02:41:54.298986 1836804 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1216 02:41:54.306619 1836804 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1798370.pem
I1216 02:41:54.314174 1836804 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1798370.pem /etc/ssl/certs/1798370.pem
I1216 02:41:54.321951 1836804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1798370.pem
I1216 02:41:54.325995 1836804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:41 /usr/share/ca-certificates/1798370.pem
I1216 02:41:54.326055 1836804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1798370.pem
I1216 02:41:54.366934 1836804 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1216 02:41:54.374558 1836804 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1798370.pem /etc/ssl/certs/51391683.0
I1216 02:41:54.382069 1836804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1216 02:41:54.385804 1836804 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1216 02:41:54.385850 1836804 kubeadm.go:401] StartCluster: {Name:functional-389759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-389759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 02:41:54.385921 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1216 02:41:54.385992 1836804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1216 02:41:54.413389 1836804 cri.go:89] found id: ""
I1216 02:41:54.413457 1836804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1216 02:41:54.421422 1836804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1216 02:41:54.429246 1836804 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1216 02:41:54.429307 1836804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1216 02:41:54.437247 1836804 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1216 02:41:54.437267 1836804 kubeadm.go:158] found existing configuration files:
I1216 02:41:54.437319 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1216 02:41:54.445101 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1216 02:41:54.445170 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1216 02:41:54.452507 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1216 02:41:54.460108 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1216 02:41:54.460176 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1216 02:41:54.467661 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1216 02:41:54.475279 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1216 02:41:54.475348 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1216 02:41:54.483185 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1216 02:41:54.490947 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1216 02:41:54.491005 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1216 02:41:54.498694 1836804 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1216 02:41:54.546753 1836804 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1216 02:41:54.547107 1836804 kubeadm.go:319] [preflight] Running pre-flight checks
I1216 02:41:54.626707 1836804 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1216 02:41:54.626771 1836804 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1216 02:41:54.626806 1836804 kubeadm.go:319] OS: Linux
I1216 02:41:54.626850 1836804 kubeadm.go:319] CGROUPS_CPU: enabled
I1216 02:41:54.626897 1836804 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1216 02:41:54.626943 1836804 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1216 02:41:54.626990 1836804 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1216 02:41:54.627037 1836804 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1216 02:41:54.627111 1836804 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1216 02:41:54.627157 1836804 kubeadm.go:319] CGROUPS_PIDS: enabled
I1216 02:41:54.627204 1836804 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1216 02:41:54.627249 1836804 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1216 02:41:54.695797 1836804 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1216 02:41:54.695913 1836804 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1216 02:41:54.696023 1836804 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1216 02:41:54.703436 1836804 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1216 02:41:54.709920 1836804 out.go:252] - Generating certificates and keys ...
I1216 02:41:54.710028 1836804 kubeadm.go:319] [certs] Using existing ca certificate authority
I1216 02:41:54.710105 1836804 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1216 02:41:55.046737 1836804 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1216 02:41:55.606371 1836804 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1216 02:41:55.836961 1836804 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1216 02:41:55.940463 1836804 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1216 02:41:56.102129 1836804 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1216 02:41:56.102472 1836804 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1216 02:41:56.457447 1836804 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1216 02:41:56.457942 1836804 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1216 02:41:56.680049 1836804 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1216 02:41:56.879750 1836804 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1216 02:41:57.007794 1836804 kubeadm.go:319] [certs] Generating "sa" key and public key
I1216 02:41:57.007866 1836804 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1216 02:41:57.619803 1836804 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1216 02:41:57.852516 1836804 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1216 02:41:58.237294 1836804 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1216 02:41:58.493291 1836804 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1216 02:41:59.042206 1836804 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1216 02:41:59.042963 1836804 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1216 02:41:59.046273 1836804 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1216 02:41:59.050046 1836804 out.go:252] - Booting up control plane ...
I1216 02:41:59.050147 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1216 02:41:59.050228 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1216 02:41:59.050909 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1216 02:41:59.068105 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1216 02:41:59.068401 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1216 02:41:59.075715 1836804 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1216 02:41:59.075997 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1216 02:41:59.076196 1836804 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1216 02:41:59.225479 1836804 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1216 02:41:59.225591 1836804 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1216 02:45:59.226442 1836804 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001007216s
I1216 02:45:59.226462 1836804 kubeadm.go:319]
I1216 02:45:59.226518 1836804 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1216 02:45:59.226555 1836804 kubeadm.go:319] - The kubelet is not running
I1216 02:45:59.226660 1836804 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1216 02:45:59.226664 1836804 kubeadm.go:319]
I1216 02:45:59.226767 1836804 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1216 02:45:59.226798 1836804 kubeadm.go:319] - 'systemctl status kubelet'
I1216 02:45:59.226853 1836804 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1216 02:45:59.226857 1836804 kubeadm.go:319]
I1216 02:45:59.232079 1836804 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1216 02:45:59.232553 1836804 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1216 02:45:59.232660 1836804 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1216 02:45:59.232938 1836804 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1216 02:45:59.232942 1836804 kubeadm.go:319]
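Before the retry below, the failure can be triaged with the commands kubeadm itself suggests, run inside the node container; a sketch following this log's docker exec pattern (the healthz URL is the one kubeadm polled):

    # Check kubelet's unit state and recent journal entries.
    docker exec functional-389759 systemctl status kubelet --no-pager
    docker exec functional-389759 journalctl -xeu kubelet --no-pager
    # Probe the same healthz endpoint kubeadm's wait-control-plane phase used.
    docker exec functional-389759 curl -sSL http://127.0.0.1:10248/healthz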
W1216 02:45:59.233127 1836804 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-389759 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001007216s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
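The first attempt fails with "connection refused" on port 10248: kubeadm polls the kubelet health endpoint there for up to 4 minutes before giving up. The same probe can be repeated by hand against a still-running docker-driver node (a sketch using the profile from this run; a healthy kubelet answers "ok"):

  minikube ssh -p functional-389759 -- curl -sSL http://127.0.0.1:10248/healthz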
I1216 02:45:59.233221 1836804 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1216 02:45:59.233354 1836804 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1216 02:45:59.643275 1836804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1216 02:45:59.656370 1836804 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1216 02:45:59.656424 1836804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1216 02:45:59.664108 1836804 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1216 02:45:59.664116 1836804 kubeadm.go:158] found existing configuration files:
I1216 02:45:59.664176 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1216 02:45:59.672324 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1216 02:45:59.672388 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1216 02:45:59.680013 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1216 02:45:59.687704 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1216 02:45:59.687759 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1216 02:45:59.695199 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1216 02:45:59.702823 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1216 02:45:59.702886 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1216 02:45:59.710304 1836804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1216 02:45:59.718282 1836804 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1216 02:45:59.718340 1836804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
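Before retrying, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that fails the check; since none of the files exist after the kubeadm reset, every grep exits with status 2 and each rm is a no-op. Condensed, the sequence above is equivalent to a loop like this sketch:

  for f in admin kubelet controller-manager scheduler; do
    # grep exits non-zero if the endpoint is absent or the file is missing
    sudo grep -q https://control-plane.minikube.internal:8441 "/etc/kubernetes/$f.conf" \
      || sudo rm -f "/etc/kubernetes/$f.conf"
  done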
I1216 02:45:59.726059 1836804 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1216 02:45:59.767401 1836804 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1216 02:45:59.767596 1836804 kubeadm.go:319] [preflight] Running pre-flight checks
I1216 02:45:59.835846 1836804 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1216 02:45:59.835908 1836804 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1216 02:45:59.835940 1836804 kubeadm.go:319] OS: Linux
I1216 02:45:59.835982 1836804 kubeadm.go:319] CGROUPS_CPU: enabled
I1216 02:45:59.836026 1836804 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1216 02:45:59.836069 1836804 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1216 02:45:59.836113 1836804 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1216 02:45:59.836157 1836804 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1216 02:45:59.836200 1836804 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1216 02:45:59.836242 1836804 kubeadm.go:319] CGROUPS_PIDS: enabled
I1216 02:45:59.836286 1836804 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1216 02:45:59.836328 1836804 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1216 02:45:59.903760 1836804 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1216 02:45:59.903859 1836804 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1216 02:45:59.903943 1836804 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1216 02:45:59.915476 1836804 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1216 02:45:59.920820 1836804 out.go:252] - Generating certificates and keys ...
I1216 02:45:59.920913 1836804 kubeadm.go:319] [certs] Using existing ca certificate authority
I1216 02:45:59.920981 1836804 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1216 02:45:59.921058 1836804 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1216 02:45:59.921121 1836804 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1216 02:45:59.921192 1836804 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1216 02:45:59.921242 1836804 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1216 02:45:59.921306 1836804 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1216 02:45:59.921369 1836804 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1216 02:45:59.921449 1836804 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1216 02:45:59.921522 1836804 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1216 02:45:59.921563 1836804 kubeadm.go:319] [certs] Using the existing "sa" key
I1216 02:45:59.921616 1836804 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1216 02:46:00.156452 1836804 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1216 02:46:00.507591 1836804 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1216 02:46:01.084996 1836804 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1216 02:46:01.620158 1836804 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1216 02:46:01.821348 1836804 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1216 02:46:01.821979 1836804 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1216 02:46:01.825522 1836804 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1216 02:46:01.828839 1836804 out.go:252] - Booting up control plane ...
I1216 02:46:01.828936 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1216 02:46:01.829015 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1216 02:46:01.829439 1836804 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1216 02:46:01.849660 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1216 02:46:01.849940 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1216 02:46:01.857335 1836804 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1216 02:46:01.857593 1836804 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1216 02:46:01.857769 1836804 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1216 02:46:02.003938 1836804 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1216 02:46:02.004086 1836804 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1216 02:50:01.999350 1836804 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001643344s
I1216 02:50:01.999373 1836804 kubeadm.go:319]
I1216 02:50:01.999432 1836804 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1216 02:50:01.999470 1836804 kubeadm.go:319] - The kubelet is not running
I1216 02:50:02.001787 1836804 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1216 02:50:02.001815 1836804 kubeadm.go:319]
I1216 02:50:02.002008 1836804 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1216 02:50:02.002064 1836804 kubeadm.go:319] - 'systemctl status kubelet'
I1216 02:50:02.002446 1836804 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1216 02:50:02.002454 1836804 kubeadm.go:319]
I1216 02:50:02.011021 1836804 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1216 02:50:02.011535 1836804 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1216 02:50:02.011685 1836804 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1216 02:50:02.011946 1836804 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1216 02:50:02.011955 1836804 kubeadm.go:319]
I1216 02:50:02.012023 1836804 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1216 02:50:02.012085 1836804 kubeadm.go:403] duration metric: took 8m7.626242328s to StartCluster
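The retry dies on the same 4-minute kubelet health check, now with "context deadline exceeded" instead of "connection refused", which brings StartCluster to just over eight minutes. Both failures, and the two SystemVerification warnings, point at the host's cgroup v1 hierarchy; which cgroup version the node actually runs can be confirmed from inside it (sketch; stat -fc %T prints the filesystem type of the cgroup mount):

  # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy cgroup v1 hierarchy
  minikube ssh -p functional-389759 -- stat -fc %T /sys/fs/cgroup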
I1216 02:50:02.012122 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1216 02:50:02.012189 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1216 02:50:02.037959 1836804 cri.go:89] found id: ""
I1216 02:50:02.037984 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.037991 1836804 logs.go:284] No container was found matching "kube-apiserver"
I1216 02:50:02.037997 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1216 02:50:02.038059 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1216 02:50:02.070297 1836804 cri.go:89] found id: ""
I1216 02:50:02.070312 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.070319 1836804 logs.go:284] No container was found matching "etcd"
I1216 02:50:02.070324 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1216 02:50:02.070390 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1216 02:50:02.097654 1836804 cri.go:89] found id: ""
I1216 02:50:02.097669 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.097676 1836804 logs.go:284] No container was found matching "coredns"
I1216 02:50:02.097681 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1216 02:50:02.097740 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1216 02:50:02.126757 1836804 cri.go:89] found id: ""
I1216 02:50:02.126772 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.126779 1836804 logs.go:284] No container was found matching "kube-scheduler"
I1216 02:50:02.126784 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1216 02:50:02.126844 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1216 02:50:02.162393 1836804 cri.go:89] found id: ""
I1216 02:50:02.162407 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.162415 1836804 logs.go:284] No container was found matching "kube-proxy"
I1216 02:50:02.162420 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1216 02:50:02.162483 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1216 02:50:02.190116 1836804 cri.go:89] found id: ""
I1216 02:50:02.190132 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.190140 1836804 logs.go:284] No container was found matching "kube-controller-manager"
I1216 02:50:02.190146 1836804 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1216 02:50:02.190220 1836804 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1216 02:50:02.216152 1836804 cri.go:89] found id: ""
I1216 02:50:02.216179 1836804 logs.go:282] 0 containers: []
W1216 02:50:02.216188 1836804 logs.go:284] No container was found matching "kindnet"
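With the apiserver never having come up, minikube walks the expected control-plane workloads through the CRI and finds none of them; every query above returned an empty ID list. The sequence amounts to this sketch:

  # all seven names come up empty here because the kubelet never started any pods
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
    sudo crictl ps -a --quiet --name="$name"
  done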
I1216 02:50:02.216196 1836804 logs.go:123] Gathering logs for kubelet ...
I1216 02:50:02.216206 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1216 02:50:02.274351 1836804 logs.go:123] Gathering logs for dmesg ...
I1216 02:50:02.274370 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1216 02:50:02.292154 1836804 logs.go:123] Gathering logs for describe nodes ...
I1216 02:50:02.292171 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1216 02:50:02.358339 1836804 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1216 02:50:02.349506 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.350079 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.351854 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.352317 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.353783 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1216 02:50:02.349506 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.350079 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.351854 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.352317 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:02.353783 4798 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1216 02:50:02.358352 1836804 logs.go:123] Gathering logs for containerd ...
I1216 02:50:02.358362 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1216 02:50:02.396543 1836804 logs.go:123] Gathering logs for container status ...
I1216 02:50:02.396563 1836804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
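The same diagnostics minikube gathers here can be pulled manually from a still-running docker-driver node (sketch, same profile as above):

  minikube ssh -p functional-389759 -- sudo journalctl -u kubelet -n 400
  minikube ssh -p functional-389759 -- sudo journalctl -u containerd -n 400
  minikube ssh -p functional-389759 -- sudo crictl ps -a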
W1216 02:50:02.426020 1836804 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001643344s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1216 02:50:02.426059 1836804 out.go:285] *
W1216 02:50:02.426140 1836804 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1216 02:50:02.426158 1836804 out.go:285] *
W1216 02:50:02.428541 1836804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 02:50:02.434930 1836804 out.go:203]
W1216 02:50:02.437781 1836804 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
W1216 02:50:02.437829 1836804 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1216 02:50:02.437850 1836804 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
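Two follow-ups fall out of the messages above; both are sketches, not verified on this CI host. The first is the suggestion line applied to the failing invocation; the second restates what the cgroups v1 deprecation warning requires for kubelet v1.35 or newer:

  # 1. Retry with the suggested cgroup driver override:
  out/minikube-linux-arm64 start -p functional-389759 --driver=docker --container-runtime=containerd \
    --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

  # 2. Per the kubeadm warning, staying on a cgroup v1 host with kubelet v1.35+ additionally
  #    requires setting the KubeletConfiguration option 'FailCgroupV1' to 'false' (kubeadm writes
  #    that config to /var/lib/kubelet/config.yaml) and explicitly skipping the validation; see
  #    https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1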
I1216 02:50:02.441002 1836804 out.go:203]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851200569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851269277Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851378460Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851449491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851508763Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851568634Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851629023Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851691364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851758382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.851845067Z" level=info msg="Connect containerd service"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.852205272Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.852902281Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.865826242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.865895336Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.865946986Z" level=info msg="Start subscribing containerd event"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.866002484Z" level=info msg="Start recovering state"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908194480Z" level=info msg="Start event monitor"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908246515Z" level=info msg="Start cni network conf syncer for default"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908257592Z" level=info msg="Start streaming server"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908266790Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908275134Z" level=info msg="runtime interface starting up..."
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908281148Z" level=info msg="starting plugins..."
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908294161Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 02:41:52 functional-389759 containerd[765]: time="2025-12-16T02:41:52.908571980Z" level=info msg="containerd successfully booted in 0.086424s"
Dec 16 02:41:52 functional-389759 systemd[1]: Started containerd.service - containerd container runtime.
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1216 02:50:03.407333 4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:03.408202 4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:03.409830 4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:03.410189 4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1216 02:50:03.411726 4916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[ +41.065751] overlayfs: idmapped layers are currently not supported
[Dec16 01:35] overlayfs: idmapped layers are currently not supported
[Dec16 01:36] overlayfs: idmapped layers are currently not supported
[Dec16 01:37] overlayfs: idmapped layers are currently not supported
[Dec16 01:39] overlayfs: idmapped layers are currently not supported
[Dec16 01:41] overlayfs: idmapped layers are currently not supported
[Dec16 01:52] overlayfs: idmapped layers are currently not supported
[Dec16 01:53] overlayfs: idmapped layers are currently not supported
[Dec16 01:54] overlayfs: idmapped layers are currently not supported
[ +4.093900] overlayfs: idmapped layers are currently not supported
[Dec16 01:55] overlayfs: idmapped layers are currently not supported
[Dec16 01:56] overlayfs: idmapped layers are currently not supported
[Dec16 01:57] overlayfs: idmapped layers are currently not supported
[Dec16 01:58] overlayfs: idmapped layers are currently not supported
[ +0.991766] overlayfs: idmapped layers are currently not supported
[Dec16 02:00] overlayfs: idmapped layers are currently not supported
[ +1.213477] overlayfs: idmapped layers are currently not supported
[Dec16 02:01] overlayfs: idmapped layers are currently not supported
[Dec16 02:18] overlayfs: idmapped layers are currently not supported
[Dec16 02:20] overlayfs: idmapped layers are currently not supported
[Dec16 02:22] overlayfs: idmapped layers are currently not supported
[Dec16 02:24] overlayfs: idmapped layers are currently not supported
[Dec16 02:25] overlayfs: idmapped layers are currently not supported
[Dec16 02:27] overlayfs: idmapped layers are currently not supported
[Dec16 02:29] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
02:50:03 up 8:32, 0 user, load average: 0.04, 0.51, 1.12
Linux functional-389759 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 16 02:50:00 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 16 02:50:01 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:01 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:01 functional-389759 kubelet[4722]: E1216 02:50:01.236735 4722 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 16 02:50:01 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:01 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:01 functional-389759 kubelet[4728]: E1216 02:50:01.987782 4728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 02:50:01 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 02:50:02 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 16 02:50:02 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:02 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:02 functional-389759 kubelet[4825]: E1216 02:50:02.761767 4825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 16 02:50:02 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 02:50:02 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 02:50:03 functional-389759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 16 02:50:03 functional-389759 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:03 functional-389759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 02:50:03 functional-389759 kubelet[4924]: E1216 02:50:03.492495 4924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 16 02:50:03 functional-389759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 02:50:03 functional-389759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
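The journal excerpt pins down the failure: kubelet v1.35.0-beta.0 exits during configuration validation because the host runs cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), and systemd restarts it continuously (the counter climbs from 319 to 322 in about two seconds), so the healthz endpoint on 10248 never comes up. The restart churn is also visible from the unit itself (sketch; NRestarts is a systemd service property):

  minikube ssh -p functional-389759 -- systemctl show kubelet --property=NRestarts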
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-389759 -n functional-389759: exit status 6 (334.379902ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1216 02:50:03.878381 1842531 status.go:458] kubeconfig endpoint: get endpoint: "functional-389759" does not appear in /home/jenkins/minikube-integration/22158-1796512/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-389759" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.07s)