=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 21:02:36.513725 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:03:04.221077 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.126930 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.133332 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.144827 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.166298 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.207949 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.289492 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.451250 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:44.773066 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:45.415410 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:46.697342 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:49.260318 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:04:54.382320 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:05:04.624407 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:05:25.106043 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:06:06.067675 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:07:27.989693 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-446665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 21:07:36.513767 263241 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/addons-409059/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
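Note: the cert_rotation errors above are client-go trying to reload client certificates for profiles that were deleted earlier in the run (addons-409059, functional-446665); they are leftover watcher noise, not the cause of this failure. If the stale kubeconfig entries ever need cleaning up by hand, a sketch (entry names taken from the error paths; whether a matching user entry exists depends on the kubeconfig):
kubectl config get-contexts                       # confirm which stale entries remain
kubectl config delete-context functional-446665   # drop the context of the deleted profile
kubectl config delete-user functional-446665      # drop its credentials entry, if present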
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m26.066342345s)
-- stdout --
* [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=21997
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-753958" primary control-plane node in "functional-753958" cluster
* Pulling base image v0.0.48-1764169655-21974 ...
* Found network options:
- HTTP_PROXY=localhost:41313
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:41313 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000311038s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000050144s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000050144s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
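The start fails because the kubelet never answers its health endpoint: kubeadm polls http://127.0.0.1:10248/healthz for 4m0s and gives up, the retry ending in "connection refused". Since the docker driver runs the node as a privileged container, the log's own troubleshooting advice can be followed from the host; a minimal sketch (the container name is the profile name, and curl is assumed to be present in the kicbase image):
docker exec functional-753958 systemctl status kubelet --no-pager              # is the unit even running?
docker exec functional-753958 journalctl -xeu kubelet --no-pager | tail -n 50  # the log the error message points at
docker exec functional-753958 curl -sS http://127.0.0.1:10248/healthz          # the probe kubeadm was waiting on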
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
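Two leads stand out in the stderr: the cgroups v1 deprecation warning (this Ubuntu 20.04 host is still on cgroup v1 with the cgroupfs driver) and minikube's own suggestion. A sketch of the suggested retry, with the extra-config flag quoted verbatim from the log:
out/minikube-linux-arm64 start -p functional-753958 --memory=4096 --apiserver-port=8441 \
  --wait=all --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.35.0-beta.0 \
  --extra-config=kubelet.cgroup-driver=systemd   # straight from the "Suggestion" line above
If cgroup v1 itself is what kubelet v1.35 rejects, the warning names the kubelet configuration option FailCgroupV1; as a KubeletConfiguration field that would be spelled lowerCamelCase (field spelling assumed from the warning text, file name hypothetical):
cat <<'EOF' > kubelet-cgroupv1-patch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failCgroupV1: false
EOF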
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
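By post-mortem time the host proxy variables are already cleared; during the start itself HTTP_PROXY=localhost:41313 was set, and the warning at the top notes that NO_PROXY did not include the minikube IP (192.168.49.2). Per that warning and the linked vpn_and_proxy handbook page, a proxied run would export something like (values taken from this log; the /24 form covers the whole minikube network):
export HTTP_PROXY=localhost:41313
export NO_PROXY=192.168.49.0/24,${NO_PROXY}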
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-753958
helpers_test.go:243: (dbg) docker inspect functional-753958:
-- stdout --
[
{
"Id": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
"Created": "2025-12-02T21:00:39.470229988Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 301734,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-02T21:00:39.535019201Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
"ResolvConfPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hostname",
"HostsPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/hosts",
"LogPath": "/var/lib/docker/containers/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a/321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a-json.log",
"Name": "/functional-753958",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-753958:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-753958",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "321ef4a88b51fbfdd50a39497fc9a9098fdd9bf5b7fe96859fc4d1789c73770a",
"LowerDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438-init/diff:/var/lib/docker/overlay2/ec0ae388c0f1f7024fff6d96e1d44b5d2c6ae7046de01cfec85114eb00488fd9/diff",
"MergedDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/merged",
"UpperDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/diff",
"WorkDir": "/var/lib/docker/overlay2/3665b15b8aefa2bed683ede85f9d5fb00ccdb82d55dd4df5dd60464481771438/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-753958",
"Source": "/var/lib/docker/volumes/functional-753958/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-753958",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-753958",
"name.minikube.sigs.k8s.io": "functional-753958",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "44df82336b1507d3d877e818baebb098332071ab7b3e3f7343e15c1fe55b3ab1",
"SandboxKey": "/var/run/docker/netns/44df82336b15",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33108"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33109"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33112"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33110"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33111"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-753958": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "9a:7f:7f:d7:c5:84",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "0e90d0c1216d32743827f22180e4e07c31360f0f3cc3431312aff46869716bb9",
"EndpointID": "5ead8efafa1df1b03c8f1f51c032157081a17706bc48186adc0670bc42c0b521",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-753958",
"321ef4a88b51"
]
}
}
}
}
]
-- /stdout --
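For targeted checks, the same data can be pulled with docker inspect's Go-template --format flag rather than dumping the whole document; a small sketch against this container:
docker inspect -f '{{.State.Status}}' functional-753958                                                     # running
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' functional-753958                       # 4 GiB, 2 CPUs, matching --memory=4096
docker inspect -f '{{(index .NetworkSettings.Networks "functional-753958").IPAddress}}' functional-753958   # 192.168.49.2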
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-753958 -n functional-753958: exit status 6 (342.311802ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1202 21:09:04.647822 307441 status.go:458] kubeconfig endpoint: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
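The status error confirms the kubeconfig never received a functional-753958 entry (the start failed before admin.conf could be merged in), which is also why kubectl is said to point at a stale minikube-vm. The fix the status output itself prints, plus a verification step (with the cluster in this state it may still have nothing valid to point at):
out/minikube-linux-arm64 -p functional-753958 update-context   # rewrite this profile's kubeconfig entry
kubectl config current-context                                 # should then report functional-753958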
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-753958 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-446665 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ ssh │ functional-446665 ssh sudo cat /etc/ssl/certs/2632412.pem │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ ssh │ functional-446665 ssh sudo cat /usr/share/ca-certificates/2632412.pem │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ ssh │ functional-446665 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image load --daemon kicbase/echo-server:functional-446665 --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ ssh │ functional-446665 ssh sudo cat /etc/test/nested/copy/263241/hosts │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image save kicbase/echo-server:functional-446665 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image rm kicbase/echo-server:functional-446665 --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ update-context │ functional-446665 update-context --alsologtostderr -v=2 │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ update-context │ functional-446665 update-context --alsologtostderr -v=2 │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ update-context │ functional-446665 update-context --alsologtostderr -v=2 │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image save --daemon kicbase/echo-server:functional-446665 --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls --format yaml --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls --format short --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls --format json --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls --format table --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ ssh │ functional-446665 ssh pgrep buildkitd │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ │
│ image │ functional-446665 image build -t localhost/my-image:functional-446665 testdata/build --alsologtostderr │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ image │ functional-446665 image ls │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ delete │ -p functional-446665 │ functional-446665 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ 02 Dec 25 21:00 UTC │
│ start │ -p functional-753958 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-753958 │ jenkins │ v1.37.0 │ 02 Dec 25 21:00 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/02 21:00:38
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1202 21:00:38.271371 301428 out.go:360] Setting OutFile to fd 1 ...
I1202 21:00:38.271474 301428 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:38.271478 301428 out.go:374] Setting ErrFile to fd 2...
I1202 21:00:38.271482 301428 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 21:00:38.271751 301428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-261381/.minikube/bin
I1202 21:00:38.272150 301428 out.go:368] Setting JSON to false
I1202 21:00:38.272927 301428 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9777,"bootTime":1764699462,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1202 21:00:38.272984 301428 start.go:143] virtualization:
I1202 21:00:38.277384 301428 out.go:179] * [functional-753958] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1202 21:00:38.282008 301428 out.go:179] - MINIKUBE_LOCATION=21997
I1202 21:00:38.282136 301428 notify.go:221] Checking for updates...
I1202 21:00:38.288781 301428 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1202 21:00:38.293076 301428 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21997-261381/kubeconfig
I1202 21:00:38.296128 301428 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-261381/.minikube
I1202 21:00:38.299257 301428 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1202 21:00:38.302323 301428 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1202 21:00:38.305550 301428 driver.go:422] Setting default libvirt URI to qemu:///system
I1202 21:00:38.339228 301428 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1202 21:00:38.339333 301428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:00:38.395896 301428 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:00:38.387077419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:00:38.395991 301428 docker.go:319] overlay module found
I1202 21:00:38.399209 301428 out.go:179] * Using the docker driver based on user configuration
I1202 21:00:38.402137 301428 start.go:309] selected driver: docker
I1202 21:00:38.402149 301428 start.go:927] validating driver "docker" against <nil>
I1202 21:00:38.402161 301428 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1202 21:00:38.402895 301428 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 21:00:38.457208 301428 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 21:00:38.448783961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 21:00:38.457360 301428 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1202 21:00:38.457571 301428 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1202 21:00:38.460712 301428 out.go:179] * Using Docker driver with root privileges
I1202 21:00:38.463650 301428 cni.go:84] Creating CNI manager for ""
I1202 21:00:38.463707 301428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1202 21:00:38.463714 301428 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1202 21:00:38.463789 301428 start.go:353] cluster config:
{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 21:00:38.466962 301428 out.go:179] * Starting "functional-753958" primary control-plane node in "functional-753958" cluster
I1202 21:00:38.469854 301428 cache.go:134] Beginning downloading kic base image for docker with containerd
I1202 21:00:38.472821 301428 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
I1202 21:00:38.475713 301428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1202 21:00:38.475786 301428 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
I1202 21:00:38.499704 301428 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
I1202 21:00:38.499715 301428 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
W1202 21:00:38.537639 301428 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
W1202 21:00:38.725625 301428 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
I1202 21:00:38.725816 301428 cache.go:107] acquiring lock: {Name:mke5dae17862187f473c65911f02cdffd3c2fff1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.725905 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1202 21:00:38.725914 301428 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 111.627µs
I1202 21:00:38.725927 301428 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1202 21:00:38.725937 301428 cache.go:107] acquiring lock: {Name:mkf8cacd313205d2d6c311b56d9047bd16fb6fc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.725965 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
I1202 21:00:38.725969 301428 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.616µs
I1202 21:00:38.725974 301428 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
I1202 21:00:38.725982 301428 cache.go:107] acquiring lock: {Name:mkbb0231b02f776087aceb642d9cba73e91dc6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.725998 301428 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
I1202 21:00:38.726006 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
I1202 21:00:38.726010 301428 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 29.472µs
I1202 21:00:38.726016 301428 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
I1202 21:00:38.726023 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json: {Name:mke67144143d67dc20fbcc161445c3218cd8c3b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:38.726025 301428 cache.go:107] acquiring lock: {Name:mk847fad322ba7dc5e542c96df60fa2fcfb416f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.726054 301428 cache.go:107] acquiring lock: {Name:mk29f4a321c45b849306dc37a02b7559fb0163c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.726129 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I1202 21:00:38.726136 301428 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 83.485µs
I1202 21:00:38.726141 301428 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I1202 21:00:38.726150 301428 cache.go:107] acquiring lock: {Name:mk30f4466a9f50de9b3a523091d981af15cd4a2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.726176 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
I1202 21:00:38.726180 301428 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 30.883µs
I1202 21:00:38.726184 301428 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
I1202 21:00:38.726193 301428 cache.go:243] Successfully downloaded all kic artifacts
I1202 21:00:38.726193 301428 cache.go:107] acquiring lock: {Name:mkdcedcf97d0eca2b6f4182aa3746e4c16a845fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.726217 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
I1202 21:00:38.726217 301428 start.go:360] acquireMachinesLock for functional-753958: {Name:mk3203202a2efc5b27c2a0a16d932dc1b1f07522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.726221 301428 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 29.915µs
I1202 21:00:38.726226 301428 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
I1202 21:00:38.726234 301428 cache.go:107] acquiring lock: {Name:mk77a50d68e5038d38db72d37692b17b5e88f7f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 21:00:38.726259 301428 start.go:364] duration metric: took 34.223µs to acquireMachinesLock for "functional-753958"
I1202 21:00:38.726264 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
I1202 21:00:38.726268 301428 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 35.404µs
I1202 21:00:38.726272 301428 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
I1202 21:00:38.726286 301428 cache.go:115] /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
I1202 21:00:38.726290 301428 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 266.249µs
I1202 21:00:38.726300 301428 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
I1202 21:00:38.726307 301428 cache.go:87] Successfully saved all images to host disk.
I1202 21:00:38.726276 301428 start.go:93] Provisioning new machine with config: &{Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1202 21:00:38.726332 301428 start.go:125] createHost starting for "" (driver="docker")
I1202 21:00:38.729746 301428 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1202 21:00:38.729991 301428 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41313 to docker env.
I1202 21:00:38.730027 301428 start.go:159] libmachine.API.Create for "functional-753958" (driver="docker")
I1202 21:00:38.730054 301428 client.go:173] LocalClient.Create starting
I1202 21:00:38.730113 301428 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem
I1202 21:00:38.730143 301428 main.go:143] libmachine: Decoding PEM data...
I1202 21:00:38.730161 301428 main.go:143] libmachine: Parsing certificate...
I1202 21:00:38.730217 301428 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem
I1202 21:00:38.730232 301428 main.go:143] libmachine: Decoding PEM data...
I1202 21:00:38.730244 301428 main.go:143] libmachine: Parsing certificate...
I1202 21:00:38.730593 301428 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 21:00:38.746644 301428 cli_runner.go:211] docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 21:00:38.746724 301428 network_create.go:284] running [docker network inspect functional-753958] to gather additional debugging logs...
I1202 21:00:38.746740 301428 cli_runner.go:164] Run: docker network inspect functional-753958
W1202 21:00:38.762057 301428 cli_runner.go:211] docker network inspect functional-753958 returned with exit code 1
I1202 21:00:38.762078 301428 network_create.go:287] error running [docker network inspect functional-753958]: docker network inspect functional-753958: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-753958 not found
I1202 21:00:38.762113 301428 network_create.go:289] output of [docker network inspect functional-753958]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-753958 not found
** /stderr **
I1202 21:00:38.762213 301428 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 21:00:38.778018 301428 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198b550}
I1202 21:00:38.778047 301428 network_create.go:124] attempt to create docker network functional-753958 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1202 21:00:38.778100 301428 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-753958 functional-753958
I1202 21:00:38.832894 301428 network_create.go:108] docker network functional-753958 192.168.49.0/24 created
I1202 21:00:38.832917 301428 kic.go:121] calculated static IP "192.168.49.2" for the "functional-753958" container
I1202 21:00:38.833004 301428 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1202 21:00:38.848486 301428 cli_runner.go:164] Run: docker volume create functional-753958 --label name.minikube.sigs.k8s.io=functional-753958 --label created_by.minikube.sigs.k8s.io=true
I1202 21:00:38.865344 301428 oci.go:103] Successfully created a docker volume functional-753958
I1202 21:00:38.865419 301428 cli_runner.go:164] Run: docker run --rm --name functional-753958-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753958 --entrypoint /usr/bin/test -v functional-753958:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
I1202 21:00:39.394730 301428 oci.go:107] Successfully prepared a docker volume functional-753958
I1202 21:00:39.394796 301428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1202 21:00:39.394933 301428 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1202 21:00:39.395032 301428 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1202 21:00:39.456690 301428 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-753958 --name functional-753958 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753958 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-753958 --network functional-753958 --ip 192.168.49.2 --volume functional-753958:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
I1202 21:00:39.750661 301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Running}}
I1202 21:00:39.773106 301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:00:39.798111 301428 cli_runner.go:164] Run: docker exec functional-753958 stat /var/lib/dpkg/alternatives/iptables
I1202 21:00:39.844919 301428 oci.go:144] the created container "functional-753958" has a running status.
I1202 21:00:39.844938 301428 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa...
I1202 21:00:40.139684 301428 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1202 21:00:40.178510 301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:00:40.199530 301428 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1202 21:00:40.199542 301428 kic_runner.go:114] Args: [docker exec --privileged functional-753958 chown docker:docker /home/docker/.ssh/authorized_keys]
I1202 21:00:40.262646 301428 cli_runner.go:164] Run: docker container inspect functional-753958 --format={{.State.Status}}
I1202 21:00:40.290146 301428 machine.go:94] provisionDockerMachine start ...
I1202 21:00:40.290285 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:40.311397 301428 main.go:143] libmachine: Using SSH client type: native
I1202 21:00:40.311714 301428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1202 21:00:40.311720 301428 main.go:143] libmachine: About to run SSH command:
hostname
I1202 21:00:40.312365 301428 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41436->127.0.0.1:33108: read: connection reset by peer
I1202 21:00:43.461414 301428 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
I1202 21:00:43.461428 301428 ubuntu.go:182] provisioning hostname "functional-753958"
I1202 21:00:43.461548 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:43.479235 301428 main.go:143] libmachine: Using SSH client type: native
I1202 21:00:43.479540 301428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1202 21:00:43.479548 301428 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-753958 && echo "functional-753958" | sudo tee /etc/hostname
I1202 21:00:43.638625 301428 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-753958
I1202 21:00:43.638704 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:43.656435 301428 main.go:143] libmachine: Using SSH client type: native
I1202 21:00:43.656742 301428 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 33108 <nil> <nil>}
I1202 21:00:43.656755 301428 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-753958' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753958/g' /etc/hosts;
else
echo '127.0.1.1 functional-753958' | sudo tee -a /etc/hosts;
fi
fi
I1202 21:00:43.806257 301428 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1202 21:00:43.806274 301428 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-261381/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-261381/.minikube}
I1202 21:00:43.806292 301428 ubuntu.go:190] setting up certificates
I1202 21:00:43.806300 301428 provision.go:84] configureAuth start
I1202 21:00:43.806375 301428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
I1202 21:00:43.825519 301428 provision.go:143] copyHostCerts
I1202 21:00:43.825579 301428 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem, removing ...
I1202 21:00:43.825587 301428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem
I1202 21:00:43.825705 301428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/ca.pem (1082 bytes)
I1202 21:00:43.825795 301428 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem, removing ...
I1202 21:00:43.825804 301428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem
I1202 21:00:43.825831 301428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/cert.pem (1123 bytes)
I1202 21:00:43.825880 301428 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem, removing ...
I1202 21:00:43.825889 301428 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem
I1202 21:00:43.825912 301428 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-261381/.minikube/key.pem (1675 bytes)
I1202 21:00:43.825954 301428 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem org=jenkins.functional-753958 san=[127.0.0.1 192.168.49.2 functional-753958 localhost minikube]
I1202 21:00:44.143402 301428 provision.go:177] copyRemoteCerts
I1202 21:00:44.143482 301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1202 21:00:44.143522 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:44.161759 301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:00:44.265307 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1202 21:00:44.282968 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1202 21:00:44.300706 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1202 21:00:44.318213 301428 provision.go:87] duration metric: took 511.877903ms to configureAuth
I1202 21:00:44.318230 301428 ubuntu.go:206] setting minikube options for container-runtime
I1202 21:00:44.318415 301428 config.go:182] Loaded profile config "functional-753958": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 21:00:44.318421 301428 machine.go:97] duration metric: took 4.0282656s to provisionDockerMachine
I1202 21:00:44.318426 301428 client.go:176] duration metric: took 5.588368064s to LocalClient.Create
I1202 21:00:44.318451 301428 start.go:167] duration metric: took 5.588424997s to libmachine.API.Create "functional-753958"
I1202 21:00:44.318458 301428 start.go:293] postStartSetup for "functional-753958" (driver="docker")
I1202 21:00:44.318467 301428 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1202 21:00:44.318520 301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1202 21:00:44.318564 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:44.338209 301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:00:44.441688 301428 ssh_runner.go:195] Run: cat /etc/os-release
I1202 21:00:44.445112 301428 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1202 21:00:44.445129 301428 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1202 21:00:44.445144 301428 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/addons for local assets ...
I1202 21:00:44.445196 301428 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-261381/.minikube/files for local assets ...
I1202 21:00:44.445284 301428 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem -> 2632412.pem in /etc/ssl/certs
I1202 21:00:44.445357 301428 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts -> hosts in /etc/test/nested/copy/263241
I1202 21:00:44.445404 301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/263241
I1202 21:00:44.452966 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /etc/ssl/certs/2632412.pem (1708 bytes)
I1202 21:00:44.470133 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/test/nested/copy/263241/hosts --> /etc/test/nested/copy/263241/hosts (40 bytes)
I1202 21:00:44.487877 301428 start.go:296] duration metric: took 169.40589ms for postStartSetup
I1202 21:00:44.488249 301428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
I1202 21:00:44.510451 301428 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/config.json ...
I1202 21:00:44.510747 301428 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1202 21:00:44.510793 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:44.532211 301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:00:44.634907 301428 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1202 21:00:44.639662 301428 start.go:128] duration metric: took 5.913317726s to createHost
I1202 21:00:44.639677 301428 start.go:83] releasing machines lock for "functional-753958", held for 5.913412049s
I1202 21:00:44.639746 301428 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753958
I1202 21:00:44.660700 301428 out.go:179] * Found network options:
I1202 21:00:44.663601 301428 out.go:179] - HTTP_PROXY=localhost:41313
W1202 21:00:44.666529 301428 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1202 21:00:44.669529 301428 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1202 21:00:44.672482 301428 ssh_runner.go:195] Run: cat /version.json
I1202 21:00:44.672533 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:44.672555 301428 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1202 21:00:44.672604 301428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753958
I1202 21:00:44.695363 301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:00:44.695936 301428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21997-261381/.minikube/machines/functional-753958/id_rsa Username:docker}
I1202 21:00:44.797239 301428 ssh_runner.go:195] Run: systemctl --version
I1202 21:00:44.894676 301428 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1202 21:00:44.899211 301428 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1202 21:00:44.899272 301428 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1202 21:00:44.926676 301428 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1202 21:00:44.926689 301428 start.go:496] detecting cgroup driver to use...
I1202 21:00:44.926721 301428 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1202 21:00:44.926769 301428 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1202 21:00:44.941910 301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1202 21:00:44.955039 301428 docker.go:218] disabling cri-docker service (if available) ...
I1202 21:00:44.955093 301428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1202 21:00:44.973112 301428 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1202 21:00:44.992757 301428 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1202 21:00:45.261542 301428 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1202 21:00:45.418625 301428 docker.go:234] disabling docker service ...
I1202 21:00:45.418679 301428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1202 21:00:45.441648 301428 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1202 21:00:45.455862 301428 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1202 21:00:45.580712 301428 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1202 21:00:45.703515 301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1202 21:00:45.716886 301428 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1202 21:00:45.731348 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1202 21:00:45.740544 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1202 21:00:45.749590 301428 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1202 21:00:45.749670 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1202 21:00:45.759826 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1202 21:00:45.768600 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1202 21:00:45.776994 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1202 21:00:45.785549 301428 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1202 21:00:45.793578 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1202 21:00:45.802662 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1202 21:00:45.810974 301428 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1202 21:00:45.819831 301428 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1202 21:00:45.827472 301428 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1202 21:00:45.834803 301428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 21:00:45.961142 301428 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1202 21:00:46.057083 301428 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1202 21:00:46.057145 301428 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1202 21:00:46.061374 301428 start.go:564] Will wait 60s for crictl version
I1202 21:00:46.061431 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.065495 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1202 21:00:46.090836 301428 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.5
RuntimeApiVersion: v1
I1202 21:00:46.090896 301428 ssh_runner.go:195] Run: containerd --version
I1202 21:00:46.112045 301428 ssh_runner.go:195] Run: containerd --version
I1202 21:00:46.136088 301428 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
I1202 21:00:46.139103 301428 cli_runner.go:164] Run: docker network inspect functional-753958 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 21:00:46.156482 301428 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1202 21:00:46.160632 301428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1202 21:00:46.170869 301428 kubeadm.go:884] updating cluster {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1202 21:00:46.170972 301428 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1202 21:00:46.171034 301428 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 21:00:46.195477 301428 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
I1202 21:00:46.195492 301428 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1202 21:00:46.195557 301428 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:46.195788 301428 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.195888 301428 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.195984 301428 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.196087 301428 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.196248 301428 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1202 21:00:46.196348 301428 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.196435 301428 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.198187 301428 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.198569 301428 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.198725 301428 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1202 21:00:46.198913 301428 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.199046 301428 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:46.199266 301428 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.199425 301428 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.199558 301428 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.552326 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
I1202 21:00:46.552394 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1202 21:00:46.555504 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
I1202 21:00:46.555596 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.567685 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
I1202 21:00:46.567780 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.574356 301428 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
I1202 21:00:46.574392 301428 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I1202 21:00:46.574438 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.587647 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
I1202 21:00:46.587708 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.601054 301428 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
I1202 21:00:46.601084 301428 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.601132 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.601180 301428 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
I1202 21:00:46.601191 301428 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.601212 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.601261 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1202 21:00:46.618715 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
I1202 21:00:46.618774 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.631791 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
I1202 21:00:46.631849 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.632282 301428 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
I1202 21:00:46.632309 301428 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.632362 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.632901 301428 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
I1202 21:00:46.632940 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.643461 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1202 21:00:46.643526 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.643575 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.683532 301428 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
I1202 21:00:46.683567 301428 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.683614 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.683738 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.683775 301428 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
I1202 21:00:46.683788 301428 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.683807 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.683844 301428 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
I1202 21:00:46.683854 301428 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.683872 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:46.724974 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1202 21:00:46.732713 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.762742 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.779604 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.779638 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.779707 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.779711 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.779775 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
I1202 21:00:46.779870 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
I1202 21:00:46.797714 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 21:00:46.873742 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 21:00:46.894215 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
I1202 21:00:46.894243 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
I1202 21:00:46.894325 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 21:00:46.894378 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.894424 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.894469 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.895253 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
I1202 21:00:46.895327 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1202 21:00:46.981211 301428 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
I1202 21:00:46.981289 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
I1202 21:00:46.983853 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
I1202 21:00:46.983943 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1202 21:00:46.984004 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 21:00:46.984065 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1202 21:00:46.984109 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1202 21:00:46.984141 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
I1202 21:00:46.984183 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1202 21:00:46.984221 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
I1202 21:00:46.984233 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
I1202 21:00:47.209535 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
I1202 21:00:47.209567 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
I1202 21:00:47.209623 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
I1202 21:00:47.209686 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
I1202 21:00:47.209769 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1202 21:00:47.209829 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
I1202 21:00:47.209876 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
I1202 21:00:47.209921 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
I1202 21:00:47.209959 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
I1202 21:00:47.210003 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
I1202 21:00:47.210011 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
I1202 21:00:47.239682 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
I1202 21:00:47.239708 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
I1202 21:00:47.239764 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
I1202 21:00:47.239772 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
I1202 21:00:47.239810 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
I1202 21:00:47.239818 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
W1202 21:00:47.435553 301428 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I1202 21:00:47.435686 301428 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
I1202 21:00:47.435743 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:47.562322 301428 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I1202 21:00:47.562369 301428 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:47.562427 301428 ssh_runner.go:195] Run: which crictl
I1202 21:00:47.595335 301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1202 21:00:47.595390 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1202 21:00:47.644679 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:48.887307 301428 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.291893114s)
I1202 21:00:48.887316 301428 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.242617273s)
I1202 21:00:48.887324 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
I1202 21:00:48.887342 301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1202 21:00:48.887392 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1202 21:00:48.887393 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:48.916290 301428 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1202 21:00:49.909389 301428 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.02195967s)
I1202 21:00:49.909406 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
I1202 21:00:49.909422 301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1202 21:00:49.909467 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1202 21:00:49.909560 301428 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I1202 21:00:49.909623 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1202 21:00:50.834954 301428 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1202 21:00:50.834981 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I1202 21:00:50.835055 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
I1202 21:00:50.835073 301428 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
I1202 21:00:50.835114 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
I1202 21:00:51.829464 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
I1202 21:00:51.829496 301428 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1202 21:00:51.829548 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1202 21:00:52.776173 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
I1202 21:00:52.776195 301428 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
I1202 21:00:52.776247 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
I1202 21:00:54.138711 301428 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.362441436s)
I1202 21:00:54.138738 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
I1202 21:00:54.138761 301428 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1202 21:00:54.138815 301428 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1202 21:00:54.528051 301428 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-261381/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1202 21:00:54.528076 301428 cache_images.go:125] Successfully loaded all cached images
I1202 21:00:54.528080 301428 cache_images.go:94] duration metric: took 8.332576404s to LoadCachedImages
I1202 21:00:54.528091 301428 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1202 21:00:54.528184 301428 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753958 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1202 21:00:54.528249 301428 ssh_runner.go:195] Run: sudo crictl info
I1202 21:00:54.554040 301428 cni.go:84] Creating CNI manager for ""
I1202 21:00:54.554050 301428 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1202 21:00:54.554065 301428 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1202 21:00:54.554086 301428 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753958 NodeName:functional-753958 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1202 21:00:54.554193 301428 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-753958"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
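Note: a generated config like the one above can be sanity-checked by hand once it has been written out to /var/tmp/minikube/kubeadm.yaml (as happens later in this log). Assuming the staged kubeadm is recent enough to ship the `config validate` subcommand, a check from inside the node would look roughly like:

    $ sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

This only validates the schema of each document; it cannot catch runtime failures such as the kubelet refusing to start.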
I1202 21:00:54.554262 301428 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1202 21:00:54.562816 301428 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
I1202 21:00:54.562870 301428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
I1202 21:00:54.570767 301428 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
I1202 21:00:54.570857 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
I1202 21:00:54.570930 301428 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
I1202 21:00:54.570955 301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1202 21:00:54.571028 301428 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
I1202 21:00:54.571071 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
I1202 21:00:54.587665 301428 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
I1202 21:00:54.587693 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
I1202 21:00:54.587749 301428 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
I1202 21:00:54.587758 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
I1202 21:00:54.587850 301428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
I1202 21:00:54.614261 301428 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
I1202 21:00:54.614296 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
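Note: the `checksum=file:` URLs above mean each binary is verified against its published .sha256 file before use. The equivalent manual check (illustrative only, following the standard dl.k8s.io pattern) is:

    $ curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
    $ echo "$(curl -sL https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256)  kubelet" | sha256sum --check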
I1202 21:00:55.432370 301428 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1202 21:00:55.440612 301428 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1202 21:00:55.452982 301428 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1202 21:00:55.465781 301428 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1202 21:00:55.479183 301428 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1202 21:00:55.482603 301428 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1202 21:00:55.492672 301428 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 21:00:55.613883 301428 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1202 21:00:55.629780 301428 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958 for IP: 192.168.49.2
I1202 21:00:55.629791 301428 certs.go:195] generating shared ca certs ...
I1202 21:00:55.629805 301428 certs.go:227] acquiring lock for ca certs: {Name:mka2387892f12c765de308129853400e49963e17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:55.629970 301428 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key
I1202 21:00:55.630023 301428 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key
I1202 21:00:55.630030 301428 certs.go:257] generating profile certs ...
I1202 21:00:55.630085 301428 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key
I1202 21:00:55.630095 301428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt with IP's: []
I1202 21:00:55.700640 301428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt ...
I1202 21:00:55.700656 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.crt: {Name:mk95c43ebb16136159ec2ca7da9d4919573669c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:55.700856 301428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key ...
I1202 21:00:55.700864 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/client.key: {Name:mk35ad2659bb9cd65ddd7dddae15269e3dd02152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:55.700958 301428 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35
I1202 21:00:55.700973 301428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1202 21:00:56.113158 301428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35 ...
I1202 21:00:56.113179 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35: {Name:mke2de1b4c5a9107874bf83633e5b7fb4d15f840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:56.113398 301428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35 ...
I1202 21:00:56.113406 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35: {Name:mkd207616980fb220b881bbc5091a8d96c41a549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:56.113491 301428 certs.go:382] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt.c4f6fd35 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt
I1202 21:00:56.113569 301428 certs.go:386] copying /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key.c4f6fd35 -> /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key
I1202 21:00:56.113620 301428 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key
I1202 21:00:56.113632 301428 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt with IP's: []
I1202 21:00:56.311583 301428 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt ...
I1202 21:00:56.311598 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt: {Name:mkbdd0c576e3f5093bb327c407dad458e38a479a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:56.311791 301428 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key ...
I1202 21:00:56.311799 301428 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key: {Name:mk94b055c24f3389515b29eae7f9843af752bdf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 21:00:56.311986 301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem (1338 bytes)
W1202 21:00:56.312027 301428 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241_empty.pem, impossibly tiny 0 bytes
I1202 21:00:56.312035 301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca-key.pem (1675 bytes)
I1202 21:00:56.312059 301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/ca.pem (1082 bytes)
I1202 21:00:56.312081 301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/cert.pem (1123 bytes)
I1202 21:00:56.312106 301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/certs/key.pem (1675 bytes)
I1202 21:00:56.312149 301428 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem (1708 bytes)
I1202 21:00:56.312720 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1202 21:00:56.330885 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1202 21:00:56.348969 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1202 21:00:56.366410 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1202 21:00:56.383949 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1202 21:00:56.401221 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1202 21:00:56.418985 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1202 21:00:56.436965 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/profiles/functional-753958/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1202 21:00:56.454317 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/files/etc/ssl/certs/2632412.pem --> /usr/share/ca-certificates/2632412.pem (1708 bytes)
I1202 21:00:56.471415 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1202 21:00:56.489446 301428 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-261381/.minikube/certs/263241.pem --> /usr/share/ca-certificates/263241.pem (1338 bytes)
I1202 21:00:56.507201 301428 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1202 21:00:56.520253 301428 ssh_runner.go:195] Run: openssl version
I1202 21:00:56.526341 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2632412.pem && ln -fs /usr/share/ca-certificates/2632412.pem /etc/ssl/certs/2632412.pem"
I1202 21:00:56.534702 301428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2632412.pem
I1202 21:00:56.538480 301428 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 2 21:00 /usr/share/ca-certificates/2632412.pem
I1202 21:00:56.538551 301428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2632412.pem
I1202 21:00:56.580527 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2632412.pem /etc/ssl/certs/3ec20f2e.0"
I1202 21:00:56.589528 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1202 21:00:56.599495 301428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1202 21:00:56.603317 301428 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 2 20:50 /usr/share/ca-certificates/minikubeCA.pem
I1202 21:00:56.603380 301428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1202 21:00:56.644499 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1202 21:00:56.652999 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/263241.pem && ln -fs /usr/share/ca-certificates/263241.pem /etc/ssl/certs/263241.pem"
I1202 21:00:56.661412 301428 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/263241.pem
I1202 21:00:56.665349 301428 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 2 21:00 /usr/share/ca-certificates/263241.pem
I1202 21:00:56.665408 301428 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/263241.pem
I1202 21:00:56.706618 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/263241.pem /etc/ssl/certs/51391683.0"
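Note: the `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup scheme: a CA certificate is found at /etc/ssl/certs/<hash>.0, where <hash> is the hash of the certificate's subject name. Reproducing one of these links by hand would look roughly like:

    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h is b5213941, per the log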
I1202 21:00:56.715533 301428 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1202 21:00:56.719338 301428 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1202 21:00:56.719380 301428 kubeadm.go:401] StartCluster: {Name:functional-753958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-753958 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 21:00:56.719447 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1202 21:00:56.719513 301428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1202 21:00:56.744760 301428 cri.go:89] found id: ""
I1202 21:00:56.744830 301428 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1202 21:00:56.752551 301428 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1202 21:00:56.760232 301428 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1202 21:00:56.760290 301428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1202 21:00:56.768056 301428 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1202 21:00:56.768083 301428 kubeadm.go:158] found existing configuration files:
I1202 21:00:56.768147 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1202 21:00:56.775823 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1202 21:00:56.775878 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1202 21:00:56.783312 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1202 21:00:56.790957 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1202 21:00:56.791025 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1202 21:00:56.798645 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1202 21:00:56.806311 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1202 21:00:56.806367 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1202 21:00:56.813750 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1202 21:00:56.821398 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1202 21:00:56.821456 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1202 21:00:56.828792 301428 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1202 21:00:56.865461 301428 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1202 21:00:56.865683 301428 kubeadm.go:319] [preflight] Running pre-flight checks
I1202 21:00:56.938115 301428 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1202 21:00:56.938192 301428 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1202 21:00:56.938234 301428 kubeadm.go:319] OS: Linux
I1202 21:00:56.938277 301428 kubeadm.go:319] CGROUPS_CPU: enabled
I1202 21:00:56.938324 301428 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1202 21:00:56.938382 301428 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1202 21:00:56.938445 301428 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1202 21:00:56.938493 301428 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1202 21:00:56.938553 301428 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1202 21:00:56.938606 301428 kubeadm.go:319] CGROUPS_PIDS: enabled
I1202 21:00:56.938653 301428 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1202 21:00:56.938706 301428 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1202 21:00:57.013820 301428 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1202 21:00:57.013922 301428 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1202 21:00:57.014011 301428 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1202 21:00:57.020254 301428 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1202 21:00:57.029064 301428 out.go:252] - Generating certificates and keys ...
I1202 21:00:57.029166 301428 kubeadm.go:319] [certs] Using existing ca certificate authority
I1202 21:00:57.029244 301428 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1202 21:00:57.490567 301428 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1202 21:00:57.584542 301428 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1202 21:00:57.958852 301428 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1202 21:00:58.126107 301428 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1202 21:00:58.471456 301428 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1202 21:00:58.471747 301428 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1202 21:00:58.816334 301428 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1202 21:00:58.816860 301428 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1202 21:00:58.937480 301428 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1202 21:00:59.153716 301428 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1202 21:00:59.313929 301428 kubeadm.go:319] [certs] Generating "sa" key and public key
I1202 21:00:59.314156 301428 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1202 21:00:59.433265 301428 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1202 21:00:59.938394 301428 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1202 21:01:00.730816 301428 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1202 21:01:01.099097 301428 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1202 21:01:01.378487 301428 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1202 21:01:01.379048 301428 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1202 21:01:01.392592 301428 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1202 21:01:01.454653 301428 out.go:252] - Booting up control plane ...
I1202 21:01:01.454771 301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1202 21:01:01.454946 301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1202 21:01:01.455020 301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1202 21:01:01.455122 301428 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1202 21:01:01.455214 301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1202 21:01:01.455324 301428 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1202 21:01:01.455406 301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1202 21:01:01.455444 301428 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1202 21:01:01.613618 301428 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1202 21:01:01.613759 301428 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1202 21:05:01.613870 301428 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000311038s
I1202 21:05:01.613902 301428 kubeadm.go:319]
I1202 21:05:01.613956 301428 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1202 21:05:01.613986 301428 kubeadm.go:319] - The kubelet is not running
I1202 21:05:01.614089 301428 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1202 21:05:01.614095 301428 kubeadm.go:319]
I1202 21:05:01.614204 301428 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1202 21:05:01.614239 301428 kubeadm.go:319] - 'systemctl status kubelet'
I1202 21:05:01.614267 301428 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1202 21:05:01.614270 301428 kubeadm.go:319]
I1202 21:05:01.617971 301428 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1202 21:05:01.618435 301428 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1202 21:05:01.618542 301428 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1202 21:05:01.618769 301428 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1202 21:05:01.618774 301428 kubeadm.go:319]
W1202 21:05:01.619015 301428 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-753958 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000311038s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
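Note: the SystemVerification warnings above point at the most likely culprit: this host is still on cgroups v1, which kubelet v1.35+ rejects by default. Per the warning's own remedy, keeping cgroup v1 support would mean setting `failCgroupV1: false` in the KubeletConfiguration document of the kubeadm config, roughly as follows (field name per the KEP-5573 change the warning links to; treat this as a sketch, not verified against this kubelet build):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

Running `journalctl -xeu kubelet` inside the node, as the error text suggests, would confirm whether the kubelet is exiting for this reason.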
I1202 21:05:01.619132 301428 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1202 21:05:01.619527 301428 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1202 21:05:02.033955 301428 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1202 21:05:02.050302 301428 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1202 21:05:02.050374 301428 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1202 21:05:02.062576 301428 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1202 21:05:02.062591 301428 kubeadm.go:158] found existing configuration files:
I1202 21:05:02.062649 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1202 21:05:02.071512 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1202 21:05:02.071570 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1202 21:05:02.079207 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1202 21:05:02.089206 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1202 21:05:02.089289 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1202 21:05:02.100648 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1202 21:05:02.109235 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1202 21:05:02.109293 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1202 21:05:02.116935 301428 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1202 21:05:02.126733 301428 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1202 21:05:02.126881 301428 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1202 21:05:02.137914 301428 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1202 21:05:02.181758 301428 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1202 21:05:02.181873 301428 kubeadm.go:319] [preflight] Running pre-flight checks
I1202 21:05:02.258861 301428 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1202 21:05:02.258951 301428 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1202 21:05:02.258993 301428 kubeadm.go:319] OS: Linux
I1202 21:05:02.259037 301428 kubeadm.go:319] CGROUPS_CPU: enabled
I1202 21:05:02.259085 301428 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1202 21:05:02.259131 301428 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1202 21:05:02.259178 301428 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1202 21:05:02.259225 301428 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1202 21:05:02.259273 301428 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1202 21:05:02.259320 301428 kubeadm.go:319] CGROUPS_PIDS: enabled
I1202 21:05:02.259367 301428 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1202 21:05:02.259413 301428 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1202 21:05:02.333801 301428 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1202 21:05:02.333916 301428 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1202 21:05:02.334023 301428 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1202 21:05:02.346082 301428 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1202 21:05:02.351335 301428 out.go:252] - Generating certificates and keys ...
I1202 21:05:02.351442 301428 kubeadm.go:319] [certs] Using existing ca certificate authority
I1202 21:05:02.351505 301428 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1202 21:05:02.351585 301428 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1202 21:05:02.351645 301428 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1202 21:05:02.351715 301428 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1202 21:05:02.351768 301428 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1202 21:05:02.351829 301428 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1202 21:05:02.351900 301428 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1202 21:05:02.351974 301428 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1202 21:05:02.352053 301428 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1202 21:05:02.352101 301428 kubeadm.go:319] [certs] Using the existing "sa" key
I1202 21:05:02.352156 301428 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1202 21:05:02.442806 301428 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1202 21:05:02.899145 301428 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1202 21:05:03.106047 301428 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1202 21:05:03.498121 301428 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1202 21:05:03.690328 301428 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1202 21:05:03.691070 301428 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1202 21:05:03.693619 301428 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1202 21:05:03.696683 301428 out.go:252] - Booting up control plane ...
I1202 21:05:03.696778 301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1202 21:05:03.696851 301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1202 21:05:03.697156 301428 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1202 21:05:03.717699 301428 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1202 21:05:03.717975 301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1202 21:05:03.725544 301428 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1202 21:05:03.727090 301428 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1202 21:05:03.727136 301428 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1202 21:05:03.867497 301428 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1202 21:05:03.867603 301428 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1202 21:09:03.867207 301428 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000050144s
I1202 21:09:03.867233 301428 kubeadm.go:319]
I1202 21:09:03.867302 301428 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1202 21:09:03.867334 301428 kubeadm.go:319] - The kubelet is not running
I1202 21:09:03.867456 301428 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1202 21:09:03.867484 301428 kubeadm.go:319]
I1202 21:09:03.867622 301428 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1202 21:09:03.867665 301428 kubeadm.go:319] - 'systemctl status kubelet'
I1202 21:09:03.867706 301428 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1202 21:09:03.867710 301428 kubeadm.go:319]
I1202 21:09:03.872601 301428 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1202 21:09:03.873011 301428 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1202 21:09:03.873114 301428 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1202 21:09:03.873366 301428 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1202 21:09:03.873370 301428 kubeadm.go:319]
I1202 21:09:03.873434 301428 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1202 21:09:03.873486 301428 kubeadm.go:403] duration metric: took 8m7.154111325s to StartCluster
I1202 21:09:03.873516 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1202 21:09:03.873575 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1202 21:09:03.898971 301428 cri.go:89] found id: ""
I1202 21:09:03.898986 301428 logs.go:282] 0 containers: []
W1202 21:09:03.898993 301428 logs.go:284] No container was found matching "kube-apiserver"
I1202 21:09:03.898999 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1202 21:09:03.899062 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1202 21:09:03.923416 301428 cri.go:89] found id: ""
I1202 21:09:03.923429 301428 logs.go:282] 0 containers: []
W1202 21:09:03.923436 301428 logs.go:284] No container was found matching "etcd"
I1202 21:09:03.923442 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1202 21:09:03.923504 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1202 21:09:03.950688 301428 cri.go:89] found id: ""
I1202 21:09:03.950701 301428 logs.go:282] 0 containers: []
W1202 21:09:03.950708 301428 logs.go:284] No container was found matching "coredns"
I1202 21:09:03.950713 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1202 21:09:03.950769 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1202 21:09:03.974669 301428 cri.go:89] found id: ""
I1202 21:09:03.974682 301428 logs.go:282] 0 containers: []
W1202 21:09:03.974689 301428 logs.go:284] No container was found matching "kube-scheduler"
I1202 21:09:03.974694 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1202 21:09:03.974752 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1202 21:09:04.001357 301428 cri.go:89] found id: ""
I1202 21:09:04.001369 301428 logs.go:282] 0 containers: []
W1202 21:09:04.001377 301428 logs.go:284] No container was found matching "kube-proxy"
I1202 21:09:04.001382 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1202 21:09:04.001440 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1202 21:09:04.032461 301428 cri.go:89] found id: ""
I1202 21:09:04.032475 301428 logs.go:282] 0 containers: []
W1202 21:09:04.032482 301428 logs.go:284] No container was found matching "kube-controller-manager"
I1202 21:09:04.032488 301428 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1202 21:09:04.032548 301428 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1202 21:09:04.057327 301428 cri.go:89] found id: ""
I1202 21:09:04.057342 301428 logs.go:282] 0 containers: []
W1202 21:09:04.057351 301428 logs.go:284] No container was found matching "kindnet"
I1202 21:09:04.057363 301428 logs.go:123] Gathering logs for kubelet ...
I1202 21:09:04.057374 301428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1202 21:09:04.113692 301428 logs.go:123] Gathering logs for dmesg ...
I1202 21:09:04.113711 301428 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1202 21:09:04.130702 301428 logs.go:123] Gathering logs for describe nodes ...
I1202 21:09:04.130719 301428 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1202 21:09:04.194124 301428 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1202 21:09:04.185822 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.186468 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.188330 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.188878 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.190600 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1202 21:09:04.185822 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.186468 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.188330 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.188878 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:04.190600 5374 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
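Note: the connection-refused errors above simply mean nothing is listening on the apiserver port yet, consistent with the earlier finding of zero kube-apiserver containers. A quick manual check from the host (illustrative; assumes `ss` is available in the node image, `netstat -tlnp` works similarly) would be:

    $ minikube -p functional-753958 ssh "sudo ss -tlnp | grep 8441"

An empty result confirms there is no listener on 8441.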
I1202 21:09:04.194135 301428 logs.go:123] Gathering logs for containerd ...
I1202 21:09:04.194146 301428 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1202 21:09:04.235613 301428 logs.go:123] Gathering logs for container status ...
I1202 21:09:04.235631 301428 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1202 21:09:04.265763 301428 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000050144s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
W1202 21:09:04.265805 301428 out.go:285] *
W1202 21:09:04.265903 301428 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
[stdout/stderr identical to the kubeadm init output above]
W1202 21:09:04.265959 301428 out.go:285] *
W1202 21:09:04.268308 301428 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1202 21:09:04.273804 301428 out.go:203]
W1202 21:09:04.276846 301428 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
[stdout/stderr identical to the kubeadm init output above]
W1202 21:09:04.276896 301428 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1202 21:09:04.276915 301428 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1202 21:09:04.280035 301428 out.go:203]
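The suggestion above targets the kubelet's cgroup driver, but the kubelet excerpt further down in these logs points at cgroup v1 validation as the actual blocker, so it is worth confirming which cgroup hierarchy the host mounts before retrying. A minimal sketch in shell, assuming a Linux host and the same binary and profile name as this run; the stat check is the standard way to distinguish cgroup v2 from v1, and the extra-config flag is the one quoted in the suggestion:

  # "cgroup2fs" means the host is on cgroup v2; "tmpfs" means the legacy
  # v1/hybrid layout that kubelet v1.35 rejects by default.
  stat -fc %T /sys/fs/cgroup/

  # Hedged retry using the log's own suggestion; other start flags omitted.
  out/minikube-linux-arm64 start -p functional-753958 \
    --extra-config=kubelet.cgroup-driver=systemd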
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 02 21:00:48 functional-753958 containerd[764]: time="2025-12-02T21:00:48.897231078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.901335518Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.903567046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.916377354Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:49 functional-753958 containerd[764]: time="2025-12-02T21:00:49.919108962Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.824755953Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.826934747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.834959114Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:50 functional-753958 containerd[764]: time="2025-12-02T21:00:50.836081183Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.819127594Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.821512398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.829079013Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:51 functional-753958 containerd[764]: time="2025-12-02T21:00:51.830189890Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.765584481Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.768055985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.775070804Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:52 functional-753958 containerd[764]: time="2025-12-02T21:00:52.775767392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.130296524Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.133244134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.143819863Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.144469166Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.517690981Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.520005025Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.526740402Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 21:00:54 functional-753958 containerd[764]: time="2025-12-02T21:00:54.527072085Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1202 21:09:05.310941 5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:05.311843 5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:05.312869 5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:05.314693 5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 21:09:05.315282 5496 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 2 20:15] overlayfs: idmapped layers are currently not supported
[ +4.361228] overlayfs: idmapped layers are currently not supported
[Dec 2 20:16] overlayfs: idmapped layers are currently not supported
[ +18.795347] overlayfs: idmapped layers are currently not supported
[Dec 2 20:17] overlayfs: idmapped layers are currently not supported
[ +25.695902] overlayfs: idmapped layers are currently not supported
[Dec 2 20:19] overlayfs: idmapped layers are currently not supported
[Dec 2 20:20] overlayfs: idmapped layers are currently not supported
[Dec 2 20:22] overlayfs: idmapped layers are currently not supported
[Dec 2 20:23] overlayfs: idmapped layers are currently not supported
[Dec 2 20:24] overlayfs: idmapped layers are currently not supported
[Dec 2 20:31] overlayfs: idmapped layers are currently not supported
[Dec 2 20:33] overlayfs: idmapped layers are currently not supported
[ +46.801539] overlayfs: idmapped layers are currently not supported
[Dec 2 20:34] overlayfs: idmapped layers are currently not supported
[Dec 2 20:36] overlayfs: idmapped layers are currently not supported
[Dec 2 20:37] overlayfs: idmapped layers are currently not supported
[Dec 2 20:38] overlayfs: idmapped layers are currently not supported
[ +9.909087] overlayfs: idmapped layers are currently not supported
[Dec 2 20:40] overlayfs: idmapped layers are currently not supported
[ +11.331274] overlayfs: idmapped layers are currently not supported
[Dec 2 20:41] overlayfs: idmapped layers are currently not supported
[ +30.586994] overlayfs: idmapped layers are currently not supported
[Dec 2 20:43] overlayfs: idmapped layers are currently not supported
[Dec 2 20:49] kauditd_printk_skb: 8 callbacks suppressed
==> kernel <==
21:09:05 up 2:51, 0 user, load average: 0.43, 0.58, 1.19
Linux functional-753958 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 02 21:09:02 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:02 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:02 functional-753958 kubelet[5299]: E1202 21:09:02.842421 5299 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 21:09:02 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 21:09:03 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 02 21:09:03 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:03 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:03 functional-753958 kubelet[5305]: E1202 21:09:03.585220 5305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 21:09:03 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 21:09:03 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 21:09:04 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 02 21:09:04 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:04 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:04 functional-753958 kubelet[5393]: E1202 21:09:04.377277 5393 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 21:09:04 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 21:09:04 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 21:09:05 functional-753958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 02 21:09:05 functional-753958 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:05 functional-753958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 21:09:05 functional-753958 kubelet[5442]: E1202 21:09:05.102549 5442 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 21:09:05 functional-753958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 21:09:05 functional-753958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
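The kubelet section above shows the concrete check behind the 300+ restart loop: kubelet v1.35 refuses to validate its configuration on a cgroup v1 host unless that check is explicitly disabled. A minimal sketch of the opt-in named by the SystemVerification warning, assuming the camelCase KubeletConfiguration spelling (failCgroupV1) and a hypothetical file path; on a kubeadm-managed node this would normally be merged into /var/lib/kubelet/config.yaml or applied through a kubeadm kubeletconfiguration patch rather than written standalone:

  # Hypothetical path; the field is the KubeletConfiguration spelling of the
  # 'FailCgroupV1' option the preflight warning refers to. Setting it to
  # false lets kubelet start on a cgroup v1 host despite the deprecation.
  cat <<'EOF' | sudo tee /etc/kubernetes/kubelet-cgroupv1-patch.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  failCgroupV1: false
  EOF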
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-753958 -n functional-753958: exit status 6 (353.403787ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1202 21:09:05.799610 307652 status.go:458] kubeconfig endpoint: get endpoint: "functional-753958" does not appear in /home/jenkins/minikube-integration/21997-261381/kubeconfig
** /stderr **
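The status error above reports that the profile is missing from the kubeconfig, which matches the "stale minikube-vm" warning in stdout. Assuming the same binary and profile name as this run, the repair that warning points at would be:

  # Rewrite the kubectl context for this profile, as the warning suggests.
  out/minikube-linux-arm64 update-context -p functional-753958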
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-753958" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (507.59s)