=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-449836 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1202 19:00:46.066361 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/addons-932514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:01:13.770098 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/addons-932514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:00.706193 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:00.712567 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:00.723916 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:00.745309 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:00.786769 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:00.868248 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:01.029766 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:01.351597 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:01.993712 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:03.275962 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:05.837456 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:10.958960 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:21.200374 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:03:41.681835 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:04:22.645119 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:05:44.569750 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-224594/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:05:46.065858 4435 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/addons-932514/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-449836 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m26.204073912s)
-- stdout --
* [functional-449836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22021
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22021-2487/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2487/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-449836" primary control-plane node in "functional-449836" cluster
* Pulling base image v0.0.48-1764169655-21974 ...
* Found network options:
- HTTP_PROXY=localhost:41973
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:41973 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-449836 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-449836 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000911274s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001152361s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001152361s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-449836 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
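The run's own warnings suggest two concrete remediations. A minimal sketch of both, using only the flag, IP, and arguments printed in this run's output (untested against this environment; the NO_PROXY value assumes the minikube IP stays 192.168.49.2):

    # Per the proxy warning: include the minikube IP in NO_PROXY before starting.
    export NO_PROXY="${NO_PROXY:+$NO_PROXY,}192.168.49.2"

    # Per the suggestion printed above: retry with the kubelet cgroup driver forced to systemd.
    out/minikube-linux-arm64 start -p functional-449836 --memory=4096 --apiserver-port=8441 \
      --wait=all --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

    # Separately, the cgroups v1 warning asks for the KubeletConfiguration option
    # FailCgroupV1 set to false (assumed to serialize as failCgroupV1 in the kubelet
    # config YAML); that is a kubelet configuration change, not a start flag.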
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-449836
helpers_test.go:243: (dbg) docker inspect functional-449836:
-- stdout --
[
{
"Id": "6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d",
"Created": "2025-12-02T18:58:53.515075222Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 43093,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-02T18:58:53.587847975Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
"ResolvConfPath": "/var/lib/docker/containers/6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d/hostname",
"HostsPath": "/var/lib/docker/containers/6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d/hosts",
"LogPath": "/var/lib/docker/containers/6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d/6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d-json.log",
"Name": "/functional-449836",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-449836:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-449836",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "6870f21b62bb6903aca3129f1ce4723cf3f2ffad99a50b164f8e2dc04b50e75d",
"LowerDir": "/var/lib/docker/overlay2/41ea5a18fbf8709879a7fc4066a6a4a1474aa86e898ee1ccabe5669a1871131d-init/diff:/var/lib/docker/overlay2/a59c61675ee48e07a7f4a8725bd393449453344ad8907963779ea1c0059d936c/diff",
"MergedDir": "/var/lib/docker/overlay2/41ea5a18fbf8709879a7fc4066a6a4a1474aa86e898ee1ccabe5669a1871131d/merged",
"UpperDir": "/var/lib/docker/overlay2/41ea5a18fbf8709879a7fc4066a6a4a1474aa86e898ee1ccabe5669a1871131d/diff",
"WorkDir": "/var/lib/docker/overlay2/41ea5a18fbf8709879a7fc4066a6a4a1474aa86e898ee1ccabe5669a1871131d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-449836",
"Source": "/var/lib/docker/volumes/functional-449836/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-449836",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-449836",
"name.minikube.sigs.k8s.io": "functional-449836",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "410fbaab809a56b556195f7ed6eeff8dcd31e9020fb1dbfacf74828b79df3d88",
"SandboxKey": "/var/run/docker/netns/410fbaab809a",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-449836": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "82:18:11:ce:46:48",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "3cb7d67d4fa267ddd6b37211325b224fb3fb811be8ff57bda18e19f6929ec9c8",
"EndpointID": "20c8c1a67e53d7615656777f73986a40cb1c6affb22c4db185c479ac85cbdb14",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-449836",
"6870f21b62bb"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-449836 -n functional-449836
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-449836 -n functional-449836: exit status 6 (338.030274ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1202 19:07:18.779739 48797 status.go:458] kubeconfig endpoint: get endpoint: "functional-449836" does not appear in /home/jenkins/minikube-integration/22021-2487/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
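Before the collected logs below, a short follow-up sketch of what the status output and kubeadm's troubleshooting hints themselves suggest (profile name and binary path taken from this run):

    # Repoint kubectl at the current cluster, as the stale-context warning advises.
    out/minikube-linux-arm64 -p functional-449836 update-context

    # Inspect the kubelet on the node, per kubeadm's 'systemctl'/'journalctl' hints.
    out/minikube-linux-arm64 -p functional-449836 ssh -- sudo systemctl status kubelet
    out/minikube-linux-arm64 -p functional-449836 ssh -- sudo journalctl -xeu kubelet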
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-449836 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-224594 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ ssh │ functional-224594 ssh sudo cat /etc/ssl/certs/44352.pem │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image load --daemon kicbase/echo-server:functional-224594 --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ ssh │ functional-224594 ssh sudo cat /usr/share/ca-certificates/44352.pem │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ ssh │ functional-224594 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ ssh │ functional-224594 ssh sudo cat /etc/test/nested/copy/4435/hosts │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image save kicbase/echo-server:functional-224594 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image rm kicbase/echo-server:functional-224594 --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ update-context │ functional-224594 update-context --alsologtostderr -v=2 │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image save --daemon kicbase/echo-server:functional-224594 --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ update-context │ functional-224594 update-context --alsologtostderr -v=2 │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ update-context │ functional-224594 update-context --alsologtostderr -v=2 │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls --format short --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls --format yaml --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ ssh │ functional-224594 ssh pgrep buildkitd │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ │
│ image │ functional-224594 image ls --format json --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls --format table --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image build -t localhost/my-image:functional-224594 testdata/build --alsologtostderr │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ image │ functional-224594 image ls │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ delete │ -p functional-224594 │ functional-224594 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ 02 Dec 25 18:58 UTC │
│ start │ -p functional-449836 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-449836 │ jenkins │ v1.37.0 │ 02 Dec 25 18:58 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/02 18:58:52
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1202 18:58:52.273912 42791 out.go:360] Setting OutFile to fd 1 ...
I1202 18:58:52.274090 42791 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 18:58:52.274094 42791 out.go:374] Setting ErrFile to fd 2...
I1202 18:58:52.274098 42791 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 18:58:52.274358 42791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22021-2487/.minikube/bin
I1202 18:58:52.274761 42791 out.go:368] Setting JSON to false
I1202 18:58:52.275597 42791 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2469,"bootTime":1764699464,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1202 18:58:52.275653 42791 start.go:143] virtualization:
I1202 18:58:52.282673 42791 out.go:179] * [functional-449836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1202 18:58:52.286341 42791 out.go:179] - MINIKUBE_LOCATION=22021
I1202 18:58:52.286437 42791 notify.go:221] Checking for updates...
I1202 18:58:52.293348 42791 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1202 18:58:52.296563 42791 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22021-2487/kubeconfig
I1202 18:58:52.299657 42791 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22021-2487/.minikube
I1202 18:58:52.302656 42791 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1202 18:58:52.305714 42791 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1202 18:58:52.309055 42791 driver.go:422] Setting default libvirt URI to qemu:///system
I1202 18:58:52.330835 42791 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1202 18:58:52.330951 42791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 18:58:52.400766 42791 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:58:52.390664424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 18:58:52.400856 42791 docker.go:319] overlay module found
I1202 18:58:52.406278 42791 out.go:179] * Using the docker driver based on user configuration
I1202 18:58:52.409219 42791 start.go:309] selected driver: docker
I1202 18:58:52.409230 42791 start.go:927] validating driver "docker" against <nil>
I1202 18:58:52.409241 42791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1202 18:58:52.409987 42791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1202 18:58:52.462910 42791 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-02 18:58:52.454023149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1202 18:58:52.463067 42791 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1202 18:58:52.463284 42791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1202 18:58:52.466422 42791 out.go:179] * Using Docker driver with root privileges
I1202 18:58:52.469262 42791 cni.go:84] Creating CNI manager for ""
I1202 18:58:52.469324 42791 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1202 18:58:52.469331 42791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1202 18:58:52.469405 42791 start.go:353] cluster config:
{Name:functional-449836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-449836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 18:58:52.472637 42791 out.go:179] * Starting "functional-449836" primary control-plane node in "functional-449836" cluster
I1202 18:58:52.475539 42791 cache.go:134] Beginning downloading kic base image for docker with containerd
I1202 18:58:52.478433 42791 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
I1202 18:58:52.481328 42791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
I1202 18:58:52.481478 42791 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1202 18:58:52.501368 42791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
I1202 18:58:52.501379 42791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
W1202 18:58:52.546972 42791 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
W1202 18:58:52.726074 42791 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
I1202 18:58:52.726240 42791 cache.go:107] acquiring lock: {Name:mkb3ffc95e4b7ac3756206049d851bf516a8abb7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726338 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1202 18:58:52.726347 42791 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.41µs
I1202 18:58:52.726360 42791 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1202 18:58:52.726370 42791 cache.go:107] acquiring lock: {Name:mkfa39bba55c97fa80e441f8dcbaf6dc6a2ab6fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726398 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
I1202 18:58:52.726409 42791 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 33.551µs
I1202 18:58:52.726415 42791 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
I1202 18:58:52.726423 42791 cache.go:107] acquiring lock: {Name:mk7e3720bc30e96a70479f1acc707ef52791d566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726433 42791 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/config.json ...
I1202 18:58:52.726451 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
I1202 18:58:52.726455 42791 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 33.124µs
I1202 18:58:52.726460 42791 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
I1202 18:58:52.726456 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/config.json: {Name:mk64bea15d4652689d28dddc7b023cf0d077a8b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:58:52.726469 42791 cache.go:107] acquiring lock: {Name:mk87fcb81abcb9216a37cb770c1db1797c0a7f91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726547 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
I1202 18:58:52.726551 42791 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 83.118µs
I1202 18:58:52.726556 42791 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
I1202 18:58:52.726563 42791 cache.go:107] acquiring lock: {Name:mkb0da8840651a370490ea2b46213e13fc0d5dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726588 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
I1202 18:58:52.726592 42791 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 29.383µs
I1202 18:58:52.726596 42791 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
I1202 18:58:52.726604 42791 cache.go:107] acquiring lock: {Name:mk9eec99a3e8e54b076a2ce506d08ceb8a7f49cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726627 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I1202 18:58:52.726631 42791 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 28.291µs
I1202 18:58:52.726632 42791 cache.go:243] Successfully downloaded all kic artifacts
I1202 18:58:52.726635 42791 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I1202 18:58:52.726643 42791 cache.go:107] acquiring lock: {Name:mk280b51a6d3bfe0cb60ae7355309f1bf1f99e1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726648 42791 start.go:360] acquireMachinesLock for functional-449836: {Name:mk8999fdfa518fc15358d07431fe9bec286a035e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726667 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
I1202 18:58:52.726671 42791 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 29.112µs
I1202 18:58:52.726675 42791 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
I1202 18:58:52.726682 42791 cache.go:107] acquiring lock: {Name:mkbf2c8ea9fae755e8e7ae1c483527f313757bae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 18:58:52.726689 42791 start.go:364] duration metric: took 33.485µs to acquireMachinesLock for "functional-449836"
I1202 18:58:52.726706 42791 cache.go:115] /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
I1202 18:58:52.726709 42791 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 28.234µs
I1202 18:58:52.726713 42791 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
I1202 18:58:52.726720 42791 cache.go:87] Successfully saved all images to host disk.
I1202 18:58:52.726705 42791 start.go:93] Provisioning new machine with config: &{Name:functional-449836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-449836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1202 18:58:52.726761 42791 start.go:125] createHost starting for "" (driver="docker")
I1202 18:58:52.731715 42791 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1202 18:58:52.731973 42791 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:41973 to docker env.
I1202 18:58:52.732040 42791 start.go:159] libmachine.API.Create for "functional-449836" (driver="docker")
I1202 18:58:52.732062 42791 client.go:173] LocalClient.Create starting
I1202 18:58:52.732160 42791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca.pem
I1202 18:58:52.732191 42791 main.go:143] libmachine: Decoding PEM data...
I1202 18:58:52.732211 42791 main.go:143] libmachine: Parsing certificate...
I1202 18:58:52.732255 42791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22021-2487/.minikube/certs/cert.pem
I1202 18:58:52.732271 42791 main.go:143] libmachine: Decoding PEM data...
I1202 18:58:52.732281 42791 main.go:143] libmachine: Parsing certificate...
I1202 18:58:52.732658 42791 cli_runner.go:164] Run: docker network inspect functional-449836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 18:58:52.748895 42791 cli_runner.go:211] docker network inspect functional-449836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 18:58:52.748961 42791 network_create.go:284] running [docker network inspect functional-449836] to gather additional debugging logs...
I1202 18:58:52.748976 42791 cli_runner.go:164] Run: docker network inspect functional-449836
W1202 18:58:52.766301 42791 cli_runner.go:211] docker network inspect functional-449836 returned with exit code 1
I1202 18:58:52.766319 42791 network_create.go:287] error running [docker network inspect functional-449836]: docker network inspect functional-449836: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-449836 not found
I1202 18:58:52.766330 42791 network_create.go:289] output of [docker network inspect functional-449836]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-449836 not found
** /stderr **
I1202 18:58:52.766419 42791 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 18:58:52.783516 42791 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f3110}
I1202 18:58:52.783549 42791 network_create.go:124] attempt to create docker network functional-449836 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1202 18:58:52.783610 42791 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-449836 functional-449836
I1202 18:58:52.840600 42791 network_create.go:108] docker network functional-449836 192.168.49.0/24 created
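
For reference, the inspect-then-create sequence above (a non-zero exit from `docker network inspect` means the network is missing, so a bridge network is created with a fixed subnet and gateway) reduces to a short Go sketch. ensureNetwork and its trimmed-down flag set are illustrative stand-ins, not minikube's actual network_create.go API:

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork mirrors the logged flow: inspect first, and only create
// the network when the inspect fails because it does not exist yet.
func ensureNetwork(name, subnet, gateway string) error {
	if exec.Command("docker", "network", "inspect", name).Run() == nil {
		return nil // network already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values taken from the log lines above.
	if err := ensureNetwork("functional-449836", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}
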
I1202 18:58:52.840623 42791 kic.go:121] calculated static IP "192.168.49.2" for the "functional-449836" container
I1202 18:58:52.840716 42791 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1202 18:58:52.856411 42791 cli_runner.go:164] Run: docker volume create functional-449836 --label name.minikube.sigs.k8s.io=functional-449836 --label created_by.minikube.sigs.k8s.io=true
I1202 18:58:52.874845 42791 oci.go:103] Successfully created a docker volume functional-449836
I1202 18:58:52.874916 42791 cli_runner.go:164] Run: docker run --rm --name functional-449836-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-449836 --entrypoint /usr/bin/test -v functional-449836:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
I1202 18:58:53.430102 42791 oci.go:107] Successfully prepared a docker volume functional-449836
I1202 18:58:53.430167 42791 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1202 18:58:53.430314 42791 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1202 18:58:53.430424 42791 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1202 18:58:53.500097 42791 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-449836 --name functional-449836 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-449836 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-449836 --network functional-449836 --ip 192.168.49.2 --volume functional-449836:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
I1202 18:58:53.804471 42791 cli_runner.go:164] Run: docker container inspect functional-449836 --format={{.State.Running}}
I1202 18:58:53.828082 42791 cli_runner.go:164] Run: docker container inspect functional-449836 --format={{.State.Status}}
I1202 18:58:53.853234 42791 cli_runner.go:164] Run: docker exec functional-449836 stat /var/lib/dpkg/alternatives/iptables
I1202 18:58:53.905695 42791 oci.go:144] the created container "functional-449836" has a running status.
I1202 18:58:53.905717 42791 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa...
I1202 18:58:54.847185 42791 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1202 18:58:54.867784 42791 cli_runner.go:164] Run: docker container inspect functional-449836 --format={{.State.Status}}
I1202 18:58:54.885541 42791 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1202 18:58:54.885552 42791 kic_runner.go:114] Args: [docker exec --privileged functional-449836 chown docker:docker /home/docker/.ssh/authorized_keys]
I1202 18:58:54.927203 42791 cli_runner.go:164] Run: docker container inspect functional-449836 --format={{.State.Status}}
I1202 18:58:54.945323 42791 machine.go:94] provisionDockerMachine start ...
I1202 18:58:54.945432 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:54.963475 42791 main.go:143] libmachine: Using SSH client type: native
I1202 18:58:54.963807 42791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1202 18:58:54.963813 42791 main.go:143] libmachine: About to run SSH command:
hostname
I1202 18:58:54.964491 42791 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1202 18:58:58.116148 42791 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-449836
I1202 18:58:58.116162 42791 ubuntu.go:182] provisioning hostname "functional-449836"
I1202 18:58:58.116225 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:58.134154 42791 main.go:143] libmachine: Using SSH client type: native
I1202 18:58:58.134451 42791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1202 18:58:58.134459 42791 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-449836 && echo "functional-449836" | sudo tee /etc/hostname
I1202 18:58:58.289274 42791 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-449836
I1202 18:58:58.289343 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:58.307705 42791 main.go:143] libmachine: Using SSH client type: native
I1202 18:58:58.308012 42791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1202 18:58:58.308025 42791 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-449836' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-449836/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-449836' | sudo tee -a /etc/hosts;
  fi
fi
I1202 18:58:58.456411 42791 main.go:143] libmachine: SSH cmd err, output: <nil>:
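
Each "About to run SSH command" round-trip above goes through libmachine's native Go SSH client against the port Docker forwarded for 22/tcp (32788 here). A minimal stand-alone equivalent using golang.org/x/crypto/ssh, with the key path and port taken from this log, might look like the sketch below; note the first dial really can fail with "handshake failed: EOF" while sshd is still starting, exactly as logged at 18:58:54.964491:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg)
	if err != nil {
		log.Fatal(err) // retried by minikube while sshd comes up
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
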
I1202 18:58:58.456428 42791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22021-2487/.minikube CaCertPath:/home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22021-2487/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22021-2487/.minikube}
I1202 18:58:58.456455 42791 ubuntu.go:190] setting up certificates
I1202 18:58:58.456463 42791 provision.go:84] configureAuth start
I1202 18:58:58.456536 42791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-449836
I1202 18:58:58.474527 42791 provision.go:143] copyHostCerts
I1202 18:58:58.474586 42791 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2487/.minikube/ca.pem, removing ...
I1202 18:58:58.474598 42791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2487/.minikube/ca.pem
I1202 18:58:58.474676 42791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22021-2487/.minikube/ca.pem (1082 bytes)
I1202 18:58:58.474771 42791 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2487/.minikube/cert.pem, removing ...
I1202 18:58:58.474775 42791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2487/.minikube/cert.pem
I1202 18:58:58.474798 42791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22021-2487/.minikube/cert.pem (1123 bytes)
I1202 18:58:58.474872 42791 exec_runner.go:144] found /home/jenkins/minikube-integration/22021-2487/.minikube/key.pem, removing ...
I1202 18:58:58.474876 42791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22021-2487/.minikube/key.pem
I1202 18:58:58.474897 42791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22021-2487/.minikube/key.pem (1675 bytes)
I1202 18:58:58.474940 42791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22021-2487/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca-key.pem org=jenkins.functional-449836 san=[127.0.0.1 192.168.49.2 functional-449836 localhost minikube]
I1202 18:58:58.650878 42791 provision.go:177] copyRemoteCerts
I1202 18:58:58.650932 42791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1202 18:58:58.650970 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:58.673717 42791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa Username:docker}
I1202 18:58:58.776275 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1202 18:58:58.794520 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1202 18:58:58.812473 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1202 18:58:58.830080 42791 provision.go:87] duration metric: took 373.595013ms to configureAuth
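
configureAuth generates a server certificate whose SANs match the list logged above (127.0.0.1, 192.168.49.2, the hostname, localhost, minikube). A stripped-down crypto/x509 sketch of that step, self-signed here for brevity where minikube actually signs with ca.pem/ca-key.pem, reusing the 26280h CertExpiration from the machine config:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-449836"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"functional-449836", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed: the template doubles as its own parent. minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
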
I1202 18:58:58.830097 42791 ubuntu.go:206] setting minikube options for container-runtime
I1202 18:58:58.830287 42791 config.go:182] Loaded profile config "functional-449836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1202 18:58:58.830292 42791 machine.go:97] duration metric: took 3.88495686s to provisionDockerMachine
I1202 18:58:58.830298 42791 client.go:176] duration metric: took 6.098232424s to LocalClient.Create
I1202 18:58:58.830311 42791 start.go:167] duration metric: took 6.09827249s to libmachine.API.Create "functional-449836"
I1202 18:58:58.830316 42791 start.go:293] postStartSetup for "functional-449836" (driver="docker")
I1202 18:58:58.830326 42791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1202 18:58:58.830373 42791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1202 18:58:58.830418 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:58.849514 42791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa Username:docker}
I1202 18:58:58.956256 42791 ssh_runner.go:195] Run: cat /etc/os-release
I1202 18:58:58.959670 42791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1202 18:58:58.959687 42791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1202 18:58:58.959696 42791 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2487/.minikube/addons for local assets ...
I1202 18:58:58.959756 42791 filesync.go:126] Scanning /home/jenkins/minikube-integration/22021-2487/.minikube/files for local assets ...
I1202 18:58:58.959842 42791 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2487/.minikube/files/etc/ssl/certs/44352.pem -> 44352.pem in /etc/ssl/certs
I1202 18:58:58.959915 42791 filesync.go:149] local asset: /home/jenkins/minikube-integration/22021-2487/.minikube/files/etc/test/nested/copy/4435/hosts -> hosts in /etc/test/nested/copy/4435
I1202 18:58:58.959959 42791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4435
I1202 18:58:58.967908 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/files/etc/ssl/certs/44352.pem --> /etc/ssl/certs/44352.pem (1708 bytes)
I1202 18:58:58.987264 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/files/etc/test/nested/copy/4435/hosts --> /etc/test/nested/copy/4435/hosts (40 bytes)
I1202 18:58:59.004178 42791 start.go:296] duration metric: took 173.849302ms for postStartSetup
I1202 18:58:59.004555 42791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-449836
I1202 18:58:59.021952 42791 profile.go:143] Saving config to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/config.json ...
I1202 18:58:59.022233 42791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1202 18:58:59.022275 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:59.039993 42791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa Username:docker}
I1202 18:58:59.140891 42791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1202 18:58:59.145197 42791 start.go:128] duration metric: took 6.418423516s to createHost
I1202 18:58:59.145212 42791 start.go:83] releasing machines lock for "functional-449836", held for 6.41851662s
I1202 18:58:59.145276 42791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-449836
I1202 18:58:59.166136 42791 out.go:179] * Found network options:
I1202 18:58:59.169013 42791 out.go:179] - HTTP_PROXY=localhost:41973
W1202 18:58:59.171845 42791 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1202 18:58:59.174600 42791 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1202 18:58:59.177463 42791 ssh_runner.go:195] Run: cat /version.json
I1202 18:58:59.177509 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:59.177526 42791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1202 18:58:59.177588 42791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-449836
I1202 18:58:59.195418 42791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa Username:docker}
I1202 18:58:59.205926 42791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22021-2487/.minikube/machines/functional-449836/id_rsa Username:docker}
I1202 18:58:59.381034 42791 ssh_runner.go:195] Run: systemctl --version
I1202 18:58:59.387462 42791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1202 18:58:59.391569 42791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1202 18:58:59.391631 42791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1202 18:58:59.420373 42791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1202 18:58:59.420386 42791 start.go:496] detecting cgroup driver to use...
I1202 18:58:59.420417 42791 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1202 18:58:59.420479 42791 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1202 18:58:59.435229 42791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1202 18:58:59.448969 42791 docker.go:218] disabling cri-docker service (if available) ...
I1202 18:58:59.449021 42791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1202 18:58:59.466577 42791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1202 18:58:59.484976 42791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1202 18:58:59.596402 42791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1202 18:58:59.722993 42791 docker.go:234] disabling docker service ...
I1202 18:58:59.723080 42791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1202 18:58:59.744938 42791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1202 18:58:59.757997 42791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1202 18:58:59.872052 42791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1202 18:59:00.000506 42791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1202 18:59:00.061711 42791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1202 18:59:00.115661 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1202 18:59:00.136188 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1202 18:59:00.181662 42791 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1202 18:59:00.181736 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1202 18:59:00.217867 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1202 18:59:00.230431 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1202 18:59:00.253271 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1202 18:59:00.267908 42791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1202 18:59:00.282430 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1202 18:59:00.307530 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1202 18:59:00.319002 42791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1202 18:59:00.331376 42791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1202 18:59:00.341925 42791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1202 18:59:00.351412 42791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 18:59:00.485623 42791 ssh_runner.go:195] Run: sudo systemctl restart containerd
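
The burst of sed invocations above rewrites /etc/containerd/config.toml in place before the daemon restart. The cgroup-driver change, for instance, is just an anchored regex substitution; a Go stand-in for that one sed expression:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample fragment; the real file lives at /etc/containerd/config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true`
	// Go equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
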
I1202 18:59:00.575139 42791 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1202 18:59:00.575198 42791 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1202 18:59:00.579244 42791 start.go:564] Will wait 60s for crictl version
I1202 18:59:00.579308 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:00.583071 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1202 18:59:00.609320 42791 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.5
RuntimeApiVersion: v1
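
"Will wait 60s for socket path" and "Will wait 60s for crictl version" are plain poll loops against a deadline; a sketch of the socket wait (waitForSocket is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the containerd socket appears or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
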
I1202 18:59:00.609377 42791 ssh_runner.go:195] Run: containerd --version
I1202 18:59:00.629873 42791 ssh_runner.go:195] Run: containerd --version
I1202 18:59:00.655344 42791 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
I1202 18:59:00.658158 42791 cli_runner.go:164] Run: docker network inspect functional-449836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 18:59:00.673966 42791 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1202 18:59:00.678164 42791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1202 18:59:00.688208 42791 kubeadm.go:884] updating cluster {Name:functional-449836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-449836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1202 18:59:00.688307 42791 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1202 18:59:00.688387 42791 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 18:59:00.712591 42791 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
I1202 18:59:00.712606 42791 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1202 18:59:00.712653 42791 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:00.712868 42791 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:00.712951 42791 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:00.713027 42791 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:00.713103 42791 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:00.713175 42791 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1202 18:59:00.713247 42791 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
I1202 18:59:00.713316 42791 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:00.715961 42791 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:00.716393 42791 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:00.716650 42791 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:00.716770 42791 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:00.716855 42791 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1202 18:59:00.717010 42791 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
I1202 18:59:00.717015 42791 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:00.717164 42791 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:01.056450 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
I1202 18:59:01.056510 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:01.076156 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
I1202 18:59:01.076224 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
I1202 18:59:01.076290 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
I1202 18:59:01.076352 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:01.079907 42791 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
I1202 18:59:01.079940 42791 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:01.079987 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.085118 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
I1202 18:59:01.085176 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:01.097878 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
I1202 18:59:01.097941 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:01.113700 42791 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
I1202 18:59:01.113731 42791 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:01.113738 42791 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
I1202 18:59:01.113757 42791 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
I1202 18:59:01.113781 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.113793 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.113862 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:01.138908 42791 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
I1202 18:59:01.138939 42791 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:01.139013 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.139100 42791 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
I1202 18:59:01.139114 42791 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:01.139147 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.139223 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:01.139336 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1202 18:59:01.141027 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
I1202 18:59:01.141124 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1202 18:59:01.157527 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:01.213944 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1202 18:59:01.213999 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:01.214017 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:01.214071 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:01.214100 42791 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
I1202 18:59:01.214122 42791 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I1202 18:59:01.214151 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1202 18:59:01.214162 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.232524 42791 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
I1202 18:59:01.232592 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:01.293606 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1202 18:59:01.293681 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
I1202 18:59:01.293766 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1202 18:59:01.293817 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:01.293827 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:01.298772 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1202 18:59:01.298855 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1202 18:59:01.299517 42791 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
I1202 18:59:01.299544 42791 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:01.299603 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:01.384460 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1202 18:59:01.384485 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1202 18:59:01.384537 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
I1202 18:59:01.384551 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
I1202 18:59:01.384580 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1202 18:59:01.384632 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
I1202 18:59:01.384653 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
I1202 18:59:01.384698 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1202 18:59:01.384720 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
I1202 18:59:01.384763 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:01.494028 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:01.494097 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
I1202 18:59:01.494164 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1202 18:59:01.494219 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1202 18:59:01.494257 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
I1202 18:59:01.494295 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
I1202 18:59:01.494345 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
I1202 18:59:01.494360 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
I1202 18:59:01.494397 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
I1202 18:59:01.494404 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
I1202 18:59:01.574725 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1202 18:59:01.574780 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
I1202 18:59:01.574793 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
I1202 18:59:01.574844 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
I1202 18:59:01.574897 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
I1202 18:59:01.574910 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
I1202 18:59:01.574916 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
I1202 18:59:01.679697 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
I1202 18:59:01.679725 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
I1202 18:59:01.679802 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
I1202 18:59:01.679883 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1202 18:59:01.754104 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
I1202 18:59:01.754156 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
I1202 18:59:01.810610 42791 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
I1202 18:59:01.810672 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
W1202 18:59:02.103750 42791 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I1202 18:59:02.103872 42791 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
I1202 18:59:02.103935 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:02.162822 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
I1202 18:59:02.162847 42791 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1202 18:59:02.162909 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1202 18:59:02.181523 42791 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I1202 18:59:02.181553 42791 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:02.181619 42791 ssh_runner.go:195] Run: which crictl
I1202 18:59:03.531560 42791 ssh_runner.go:235] Completed: which crictl: (1.349924164s)
I1202 18:59:03.531616 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:03.531664 42791 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.368744999s)
I1202 18:59:03.531673 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
I1202 18:59:03.531688 42791 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
I1202 18:59:03.531723 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
I1202 18:59:04.927028 42791 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.395284101s)
I1202 18:59:04.927045 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
I1202 18:59:04.927099 42791 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.395473229s)
I1202 18:59:04.927165 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:04.927224 42791 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1202 18:59:04.927249 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1202 18:59:04.954933 42791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1202 18:59:05.961324 42791 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.034055749s)
I1202 18:59:05.961340 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
I1202 18:59:05.961355 42791 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
I1202 18:59:05.961402 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
I1202 18:59:05.961472 42791 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.006527232s)
I1202 18:59:05.961493 42791 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I1202 18:59:05.961555 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1202 18:59:06.930859 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
I1202 18:59:06.930894 42791 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1202 18:59:06.930946 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1202 18:59:06.930963 42791 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1202 18:59:06.930995 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I1202 18:59:07.724713 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
I1202 18:59:07.724736 42791 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1202 18:59:07.724790 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1202 18:59:08.786365 42791 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.061552643s)
I1202 18:59:08.786390 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
I1202 18:59:08.786416 42791 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1202 18:59:08.786462 42791 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1202 18:59:09.144197 42791 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1202 18:59:09.144221 42791 cache_images.go:125] Successfully loaded all cached images
I1202 18:59:09.144225 42791 cache_images.go:94] duration metric: took 8.431607178s to LoadCachedImages
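
Every image above goes through the same three-step pattern: stat the tarball under /var/lib/minikube/images, transfer it from the host cache when the stat fails, then import it with ctr into the k8s.io namespace. A local sketch of one iteration, with plain cp standing in for the scp-over-SSH that ssh_runner actually performs:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// importImage copies a cached image tarball into place if it is missing,
// then imports it into containerd's k8s.io namespace.
func importImage(cachedTar, nodeTar string) error {
	if _, err := os.Stat(nodeTar); err != nil {
		// Stand-in for the scp step; minikube streams the file over SSH.
		if out, err := exec.Command("sudo", "cp", cachedTar, nodeTar).CombinedOutput(); err != nil {
			return fmt.Errorf("copy: %v: %s", err, out)
		}
	}
	if out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", nodeTar).CombinedOutput(); err != nil {
		return fmt.Errorf("import: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := importImage(
		"/home/jenkins/minikube-integration/22021-2487/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
	if err != nil {
		fmt.Println(err)
	}
}
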
I1202 18:59:09.144237 42791 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1202 18:59:09.144381 42791 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-449836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-449836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1202 18:59:09.144465 42791 ssh_runner.go:195] Run: sudo crictl info
I1202 18:59:09.170770 42791 cni.go:84] Creating CNI manager for ""
I1202 18:59:09.170778 42791 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1202 18:59:09.170795 42791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1202 18:59:09.170816 42791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-449836 NodeName:functional-449836 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1202 18:59:09.170919 42791 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-449836"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
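
The generated config is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick syntax sanity-check of the staged file, assuming the gopkg.in/yaml.v3 module is available; the path is the kubeadm.yaml.new destination scp'd a few lines below:

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	// Decode each "---"-separated document and print its apiVersion/kind.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err) // malformed document
		}
		fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
	}
}
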
I1202 18:59:09.170985 42791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1202 18:59:09.178776 42791 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
Initiating transfer...
I1202 18:59:09.178831 42791 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
I1202 18:59:09.186769 42791 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
I1202 18:59:09.186851 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
I1202 18:59:09.186938 42791 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
I1202 18:59:09.186967 42791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1202 18:59:09.187047 42791 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
I1202 18:59:09.187095 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
I1202 18:59:09.191795 42791 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
I1202 18:59:09.191822 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
I1202 18:59:09.206751 42791 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
I1202 18:59:09.207681 42791 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
I1202 18:59:09.207704 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
I1202 18:59:09.234713 42791 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
I1202 18:59:09.234743 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
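Each binary above is fetched through a ?checksum=file:... URL, so the transfer is verified against the published .sha256 digest rather than trusted blindly. The manual equivalent for one binary, assuming direct access to dl.k8s.io:

    # download a binary plus its digest, then verify before installing
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet
    curl -LO https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check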
I1202 18:59:09.961960 42791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1202 18:59:09.970944 42791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1202 18:59:09.989968 42791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1202 18:59:10.007101 42791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1202 18:59:10.023260 42791 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1202 18:59:10.027311 42791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
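The one-liner above edits /etc/hosts by building the new contents in a temp file first: copy every line except a stale control-plane.minikube.internal entry, append the current mapping, then copy the result over /etc/hosts in one step. Unpacked:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts  # drop any stale entry
      echo "192.168.49.2 control-plane.minikube.internal"       # append the current IP
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts                           # replace in a single copy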
I1202 18:59:10.038571 42791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 18:59:10.151309 42791 ssh_runner.go:195] Run: sudo systemctl start kubelet
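At this point the kubelet unit (/lib/systemd/system/kubelet.service) and its drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) are in place and the service has been started. Two quick ways to inspect the result on the node:

    systemctl cat kubelet        # effective unit file plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet  # the same liveness check minikube ran earlier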
I1202 18:59:10.173782 42791 certs.go:69] Setting up /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836 for IP: 192.168.49.2
I1202 18:59:10.173793 42791 certs.go:195] generating shared ca certs ...
I1202 18:59:10.173808 42791 certs.go:227] acquiring lock for ca certs: {Name:mk2ce7651a779b9fbf8eac798f9ac184328de0c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:10.173989 42791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22021-2487/.minikube/ca.key
I1202 18:59:10.174028 42791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22021-2487/.minikube/proxy-client-ca.key
I1202 18:59:10.174033 42791 certs.go:257] generating profile certs ...
I1202 18:59:10.174096 42791 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/client.key
I1202 18:59:10.174105 42791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/client.crt with IP's: []
I1202 18:59:10.350958 42791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/client.crt ...
I1202 18:59:10.350976 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/client.crt: {Name:mka3501c24c4a81a5cba9077ce4679d8fb7b6150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:10.351202 42791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/client.key ...
I1202 18:59:10.351209 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/client.key: {Name:mk6e8ec4e49bef25d90791ec183ac3c189612a66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:10.351307 42791 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.key.a65b71da
I1202 18:59:10.351320 42791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.crt.a65b71da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1202 18:59:10.748305 42791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.crt.a65b71da ...
I1202 18:59:10.748327 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.crt.a65b71da: {Name:mk837ab465d52cf51c941e1272f64ca5e5bdcb78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:10.748521 42791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.key.a65b71da ...
I1202 18:59:10.748528 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.key.a65b71da: {Name:mkaa0675b09445ccc00059b025a1d7bd37b168f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:10.748613 42791 certs.go:382] copying /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.crt.a65b71da -> /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.crt
I1202 18:59:10.748687 42791 certs.go:386] copying /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.key.a65b71da -> /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.key
I1202 18:59:10.748738 42791 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.key
I1202 18:59:10.748752 42791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.crt with IP's: []
I1202 18:59:11.063670 42791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.crt ...
I1202 18:59:11.063686 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.crt: {Name:mk41c928bd4635bac6e4c433b422747dd3bec428 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:11.063882 42791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.key ...
I1202 18:59:11.063908 42791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.key: {Name:mk6c4cf48ed7d0213c83dacf9677f8c21ee7e130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 18:59:11.064120 42791 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/4435.pem (1338 bytes)
W1202 18:59:11.064163 42791 certs.go:480] ignoring /home/jenkins/minikube-integration/22021-2487/.minikube/certs/4435_empty.pem, impossibly tiny 0 bytes
I1202 18:59:11.064171 42791 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca-key.pem (1679 bytes)
I1202 18:59:11.064200 42791 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/ca.pem (1082 bytes)
I1202 18:59:11.064224 42791 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/cert.pem (1123 bytes)
I1202 18:59:11.064246 42791 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2487/.minikube/certs/key.pem (1675 bytes)
I1202 18:59:11.064290 42791 certs.go:484] found cert: /home/jenkins/minikube-integration/22021-2487/.minikube/files/etc/ssl/certs/44352.pem (1708 bytes)
I1202 18:59:11.064951 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1202 18:59:11.083980 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1202 18:59:11.103611 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1202 18:59:11.123391 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1202 18:59:11.142411 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1202 18:59:11.160627 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1202 18:59:11.178247 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1202 18:59:11.195905 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/profiles/functional-449836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1202 18:59:11.216207 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1202 18:59:11.235530 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/certs/4435.pem --> /usr/share/ca-certificates/4435.pem (1338 bytes)
I1202 18:59:11.253846 42791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22021-2487/.minikube/files/etc/ssl/certs/44352.pem --> /usr/share/ca-certificates/44352.pem (1708 bytes)
I1202 18:59:11.271165 42791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1202 18:59:11.284566 42791 ssh_runner.go:195] Run: openssl version
I1202 18:59:11.290675 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1202 18:59:11.298989 42791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1202 18:59:11.302714 42791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 2 18:48 /usr/share/ca-certificates/minikubeCA.pem
I1202 18:59:11.302768 42791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1202 18:59:11.349229 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1202 18:59:11.357787 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4435.pem && ln -fs /usr/share/ca-certificates/4435.pem /etc/ssl/certs/4435.pem"
I1202 18:59:11.366075 42791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4435.pem
I1202 18:59:11.370248 42791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 2 18:58 /usr/share/ca-certificates/4435.pem
I1202 18:59:11.370304 42791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4435.pem
I1202 18:59:11.411393 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4435.pem /etc/ssl/certs/51391683.0"
I1202 18:59:11.419953 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44352.pem && ln -fs /usr/share/ca-certificates/44352.pem /etc/ssl/certs/44352.pem"
I1202 18:59:11.428712 42791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44352.pem
I1202 18:59:11.432428 42791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 2 18:58 /usr/share/ca-certificates/44352.pem
I1202 18:59:11.432481 42791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44352.pem
I1202 18:59:11.474041 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44352.pem /etc/ssl/certs/3ec20f2e.0"
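The ls/openssl/ln sequence above follows OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located via a symlink named <subject-hash>.0 that points at the PEM. The b5213941.0 link earlier, for example, comes from:

    # print the subject hash OpenSSL uses for lookup
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0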
I1202 18:59:11.482618 42791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1202 18:59:11.486320 42791 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1202 18:59:11.486362 42791 kubeadm.go:401] StartCluster: {Name:functional-449836 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-449836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 18:59:11.486427 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1202 18:59:11.486518 42791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1202 18:59:11.513005 42791 cri.go:89] found id: ""
I1202 18:59:11.513063 42791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1202 18:59:11.521036 42791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1202 18:59:11.529228 42791 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1202 18:59:11.529282 42791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1202 18:59:11.537318 42791 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1202 18:59:11.537327 42791 kubeadm.go:158] found existing configuration files:
I1202 18:59:11.537387 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1202 18:59:11.545218 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1202 18:59:11.545285 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1202 18:59:11.553215 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1202 18:59:11.561314 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1202 18:59:11.561368 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1202 18:59:11.570130 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1202 18:59:11.578285 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1202 18:59:11.578373 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1202 18:59:11.585978 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1202 18:59:11.593813 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1202 18:59:11.593885 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1202 18:59:11.601780 42791 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1202 18:59:11.643892 42791 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1202 18:59:11.644125 42791 kubeadm.go:319] [preflight] Running pre-flight checks
I1202 18:59:11.739484 42791 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1202 18:59:11.739547 42791 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1202 18:59:11.739597 42791 kubeadm.go:319] OS: Linux
I1202 18:59:11.739641 42791 kubeadm.go:319] CGROUPS_CPU: enabled
I1202 18:59:11.739688 42791 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1202 18:59:11.739733 42791 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1202 18:59:11.739780 42791 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1202 18:59:11.739827 42791 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1202 18:59:11.739873 42791 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1202 18:59:11.739917 42791 kubeadm.go:319] CGROUPS_PIDS: enabled
I1202 18:59:11.739963 42791 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1202 18:59:11.740008 42791 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1202 18:59:11.809299 42791 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1202 18:59:11.809420 42791 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1202 18:59:11.809521 42791 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1202 18:59:11.814586 42791 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1202 18:59:11.824092 42791 out.go:252] - Generating certificates and keys ...
I1202 18:59:11.824201 42791 kubeadm.go:319] [certs] Using existing ca certificate authority
I1202 18:59:11.824279 42791 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1202 18:59:11.970256 42791 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1202 18:59:12.248119 42791 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1202 18:59:12.556513 42791 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1202 18:59:12.623381 42791 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1202 18:59:12.726314 42791 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1202 18:59:12.726457 42791 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-449836 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1202 18:59:12.887892 42791 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1202 18:59:12.888313 42791 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-449836 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1202 18:59:13.069672 42791 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1202 18:59:13.685445 42791 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1202 18:59:14.118749 42791 kubeadm.go:319] [certs] Generating "sa" key and public key
I1202 18:59:14.118979 42791 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1202 18:59:14.342440 42791 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1202 18:59:14.713413 42791 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1202 18:59:15.152952 42791 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1202 18:59:15.551896 42791 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1202 18:59:15.713773 42791 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1202 18:59:15.714351 42791 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1202 18:59:15.717061 42791 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1202 18:59:15.748685 42791 out.go:252] - Booting up control plane ...
I1202 18:59:15.748787 42791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1202 18:59:15.748864 42791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1202 18:59:15.748929 42791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1202 18:59:15.749032 42791 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1202 18:59:15.749126 42791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1202 18:59:15.751143 42791 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1202 18:59:15.751619 42791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1202 18:59:15.751823 42791 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1202 18:59:15.891630 42791 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1202 18:59:15.891743 42791 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1202 19:03:15.892519 42791 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000911274s
I1202 19:03:15.892544 42791 kubeadm.go:319]
I1202 19:03:15.892602 42791 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1202 19:03:15.892632 42791 kubeadm.go:319] - The kubelet is not running
I1202 19:03:15.892731 42791 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1202 19:03:15.892736 42791 kubeadm.go:319]
I1202 19:03:15.892834 42791 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1202 19:03:15.892863 42791 kubeadm.go:319] - 'systemctl status kubelet'
I1202 19:03:15.892891 42791 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1202 19:03:15.892894 42791 kubeadm.go:319]
I1202 19:03:15.896498 42791 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1202 19:03:15.896940 42791 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1202 19:03:15.897049 42791 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1202 19:03:15.897307 42791 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1202 19:03:15.897323 42791 kubeadm.go:319]
I1202 19:03:15.897390 42791 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
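Per the hint in the output above, the first checks to run on the node when kubeadm times out at wait-control-plane are:

    systemctl status kubelet                   # running, or crash-looping?
    journalctl -xeu kubelet                    # recent kubelet log with context
    curl -sSL http://127.0.0.1:10248/healthz   # the exact probe kubeadm kept retrying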
W1202 19:03:15.897523 42791 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-449836 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-449836 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000911274s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
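The cgroups v1 warning in the stderr above names its own escape hatch: per that message, kubelet v1.35+ only tolerates a cgroup v1 host when the KubeletConfiguration sets FailCgroupV1 to false (and the validation is explicitly skipped). A minimal sketch of patching the generated config, on the assumption that the YAML field is spelled failCgroupV1 and belongs in the kubelet.config.k8s.io/v1beta1 document:

    # insert the opt-in right under the KubeletConfiguration 'kind:' line (GNU sed)
    sudo sed -i '/^kind: KubeletConfiguration$/a failCgroupV1: false' /var/tmp/minikube/kubeadm.yaml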
I1202 19:03:15.897619 42791 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1202 19:03:16.312004 42791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1202 19:03:16.325429 42791 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1202 19:03:16.325485 42791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1202 19:03:16.333446 42791 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1202 19:03:16.333454 42791 kubeadm.go:158] found existing configuration files:
I1202 19:03:16.333508 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1202 19:03:16.340969 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1202 19:03:16.341023 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1202 19:03:16.348368 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1202 19:03:16.355741 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1202 19:03:16.355794 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1202 19:03:16.363295 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1202 19:03:16.371331 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1202 19:03:16.371385 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1202 19:03:16.378614 42791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1202 19:03:16.386080 42791 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1202 19:03:16.386133 42791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1202 19:03:16.393798 42791 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1202 19:03:16.510990 42791 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1202 19:03:16.511423 42791 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1202 19:03:16.578932 42791 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1202 19:07:18.007623 42791 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1202 19:07:18.007650 42791 kubeadm.go:319]
I1202 19:07:18.007723 42791 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1202 19:07:18.011332 42791 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1202 19:07:18.011398 42791 kubeadm.go:319] [preflight] Running pre-flight checks
I1202 19:07:18.011490 42791 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1202 19:07:18.011544 42791 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1202 19:07:18.011579 42791 kubeadm.go:319] OS: Linux
I1202 19:07:18.011622 42791 kubeadm.go:319] CGROUPS_CPU: enabled
I1202 19:07:18.011668 42791 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1202 19:07:18.011721 42791 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1202 19:07:18.011767 42791 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1202 19:07:18.011814 42791 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1202 19:07:18.011865 42791 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1202 19:07:18.011912 42791 kubeadm.go:319] CGROUPS_PIDS: enabled
I1202 19:07:18.011963 42791 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1202 19:07:18.012007 42791 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1202 19:07:18.012085 42791 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1202 19:07:18.012189 42791 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1202 19:07:18.012281 42791 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1202 19:07:18.012374 42791 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1202 19:07:18.015497 42791 out.go:252] - Generating certificates and keys ...
I1202 19:07:18.015625 42791 kubeadm.go:319] [certs] Using existing ca certificate authority
I1202 19:07:18.015707 42791 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1202 19:07:18.015817 42791 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1202 19:07:18.015902 42791 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1202 19:07:18.015978 42791 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1202 19:07:18.016033 42791 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1202 19:07:18.016096 42791 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1202 19:07:18.016156 42791 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1202 19:07:18.016265 42791 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1202 19:07:18.016368 42791 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1202 19:07:18.016406 42791 kubeadm.go:319] [certs] Using the existing "sa" key
I1202 19:07:18.016461 42791 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1202 19:07:18.016549 42791 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1202 19:07:18.016610 42791 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1202 19:07:18.016663 42791 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1202 19:07:18.016725 42791 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1202 19:07:18.016779 42791 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1202 19:07:18.016881 42791 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1202 19:07:18.016954 42791 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1202 19:07:18.020115 42791 out.go:252] - Booting up control plane ...
I1202 19:07:18.020248 42791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1202 19:07:18.020353 42791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1202 19:07:18.020418 42791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1202 19:07:18.020526 42791 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1202 19:07:18.020618 42791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1202 19:07:18.020728 42791 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1202 19:07:18.020812 42791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1202 19:07:18.020850 42791 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1202 19:07:18.020983 42791 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1202 19:07:18.021089 42791 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1202 19:07:18.021155 42791 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001152361s
I1202 19:07:18.021158 42791 kubeadm.go:319]
I1202 19:07:18.021213 42791 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1202 19:07:18.021249 42791 kubeadm.go:319] - The kubelet is not running
I1202 19:07:18.021353 42791 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1202 19:07:18.021356 42791 kubeadm.go:319]
I1202 19:07:18.021459 42791 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1202 19:07:18.021501 42791 kubeadm.go:319] - 'systemctl status kubelet'
I1202 19:07:18.021540 42791 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1202 19:07:18.021593 42791 kubeadm.go:319]
I1202 19:07:18.021604 42791 kubeadm.go:403] duration metric: took 8m6.535245053s to StartCluster
I1202 19:07:18.021640 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1202 19:07:18.021704 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1202 19:07:18.046204 42791 cri.go:89] found id: ""
I1202 19:07:18.046218 42791 logs.go:282] 0 containers: []
W1202 19:07:18.046226 42791 logs.go:284] No container was found matching "kube-apiserver"
I1202 19:07:18.046231 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1202 19:07:18.046298 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1202 19:07:18.074390 42791 cri.go:89] found id: ""
I1202 19:07:18.074404 42791 logs.go:282] 0 containers: []
W1202 19:07:18.074411 42791 logs.go:284] No container was found matching "etcd"
I1202 19:07:18.074417 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1202 19:07:18.074480 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1202 19:07:18.099129 42791 cri.go:89] found id: ""
I1202 19:07:18.099143 42791 logs.go:282] 0 containers: []
W1202 19:07:18.099150 42791 logs.go:284] No container was found matching "coredns"
I1202 19:07:18.099155 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1202 19:07:18.099217 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1202 19:07:18.124694 42791 cri.go:89] found id: ""
I1202 19:07:18.124715 42791 logs.go:282] 0 containers: []
W1202 19:07:18.124722 42791 logs.go:284] No container was found matching "kube-scheduler"
I1202 19:07:18.124728 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1202 19:07:18.124790 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1202 19:07:18.150965 42791 cri.go:89] found id: ""
I1202 19:07:18.150979 42791 logs.go:282] 0 containers: []
W1202 19:07:18.150986 42791 logs.go:284] No container was found matching "kube-proxy"
I1202 19:07:18.150991 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1202 19:07:18.151053 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1202 19:07:18.176236 42791 cri.go:89] found id: ""
I1202 19:07:18.176276 42791 logs.go:282] 0 containers: []
W1202 19:07:18.176283 42791 logs.go:284] No container was found matching "kube-controller-manager"
I1202 19:07:18.176295 42791 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1202 19:07:18.176372 42791 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1202 19:07:18.201073 42791 cri.go:89] found id: ""
I1202 19:07:18.201087 42791 logs.go:282] 0 containers: []
W1202 19:07:18.201094 42791 logs.go:284] No container was found matching "kindnet"
I1202 19:07:18.201102 42791 logs.go:123] Gathering logs for kubelet ...
I1202 19:07:18.201112 42791 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1202 19:07:18.256328 42791 logs.go:123] Gathering logs for dmesg ...
I1202 19:07:18.256346 42791 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1202 19:07:18.267255 42791 logs.go:123] Gathering logs for describe nodes ...
I1202 19:07:18.267270 42791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1202 19:07:18.334547 42791 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1202 19:07:18.325181 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.326090 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.327868 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.328694 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.330416 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1202 19:07:18.325181 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.326090 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.327868 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.328694 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:18.330416 5393 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1202 19:07:18.334559 42791 logs.go:123] Gathering logs for containerd ...
I1202 19:07:18.334569 42791 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1202 19:07:18.377545 42791 logs.go:123] Gathering logs for container status ...
I1202 19:07:18.377562 42791 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
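With zero control-plane containers found, the evidence bundle reduces to the node-level commands above; run by hand on the node, that is:

    sudo journalctl -u kubelet -n 400      # kubelet log, the most relevant piece here
    sudo journalctl -u containerd -n 400   # container runtime log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a                      # whatever containers the CRI knows about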
W1202 19:07:18.405738 42791 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001152361s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1202 19:07:18.405778 42791 out.go:285] *
W1202 19:07:18.405837 42791 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: identical to the kubeadm init output shown above (duplicate elided)
W1202 19:07:18.405850 42791 out.go:285] *
W1202 19:07:18.408205 42791 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1202 19:07:18.413536 42791 out.go:203]
W1202 19:07:18.416345 42791 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: identical to the kubeadm init output shown above (duplicate elided)
W1202 19:07:18.416393 42791 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1202 19:07:18.416416 42791 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1202 19:07:18.419487 42791 out.go:203]
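The suggestion minikube prints above can be tried as-is. A minimal sketch of the retry, assuming only the profile name from this run and the exact flag quoted in the suggestion (no other start flags are implied by this log):

    # Retry with the kubelet cgroup driver that minikube's suggestion names
    minikube start -p functional-449836 --extra-config=kubelet.cgroup-driver=systemd

The 'K8S_KUBELET_NOT_RUNNING' classification corresponds to the kubelet restart loop visible in the '==> kubelet <==' section further below.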
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 02 18:59:03 functional-449836 containerd[764]: time="2025-12-02T18:59:03.531066084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:04 functional-449836 containerd[764]: time="2025-12-02T18:59:04.918633971Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
Dec 02 18:59:04 functional-449836 containerd[764]: time="2025-12-02T18:59:04.920818079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
Dec 02 18:59:04 functional-449836 containerd[764]: time="2025-12-02T18:59:04.934352889Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:04 functional-449836 containerd[764]: time="2025-12-02T18:59:04.935016922Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:05 functional-449836 containerd[764]: time="2025-12-02T18:59:05.950748344Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
Dec 02 18:59:05 functional-449836 containerd[764]: time="2025-12-02T18:59:05.952937269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
Dec 02 18:59:05 functional-449836 containerd[764]: time="2025-12-02T18:59:05.960084990Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:05 functional-449836 containerd[764]: time="2025-12-02T18:59:05.960411226Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:06 functional-449836 containerd[764]: time="2025-12-02T18:59:06.919972924Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
Dec 02 18:59:06 functional-449836 containerd[764]: time="2025-12-02T18:59:06.922088946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
Dec 02 18:59:06 functional-449836 containerd[764]: time="2025-12-02T18:59:06.941256244Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:06 functional-449836 containerd[764]: time="2025-12-02T18:59:06.942044051Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:07 functional-449836 containerd[764]: time="2025-12-02T18:59:07.715916014Z" level=info msg="No images store for sha256:84ea4651cf4d4486006d1346129c6964687be99508987d0ca606406fbc15a298"
Dec 02 18:59:07 functional-449836 containerd[764]: time="2025-12-02T18:59:07.718237115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\""
Dec 02 18:59:07 functional-449836 containerd[764]: time="2025-12-02T18:59:07.728198868Z" level=info msg="ImageCreate event name:\"sha256:16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:07 functional-449836 containerd[764]: time="2025-12-02T18:59:07.729109925Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:08 functional-449836 containerd[764]: time="2025-12-02T18:59:08.775930433Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
Dec 02 18:59:08 functional-449836 containerd[764]: time="2025-12-02T18:59:08.778191423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
Dec 02 18:59:08 functional-449836 containerd[764]: time="2025-12-02T18:59:08.787016306Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:08 functional-449836 containerd[764]: time="2025-12-02T18:59:08.787945119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:09 functional-449836 containerd[764]: time="2025-12-02T18:59:09.135403206Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
Dec 02 18:59:09 functional-449836 containerd[764]: time="2025-12-02T18:59:09.138423671Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
Dec 02 18:59:09 functional-449836 containerd[764]: time="2025-12-02T18:59:09.146682786Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 02 18:59:09 functional-449836 containerd[764]: time="2025-12-02T18:59:09.147313481Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1202 19:07:19.403916 5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:19.404686 5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:19.406375 5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:19.407090 5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1202 19:07:19.408738 5507 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 2 18:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015127] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.494583] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.035754] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.870945] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.299680] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
19:07:19 up 49 min, 0 user, load average: 0.47, 0.53, 0.66
Linux functional-449836 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 02 19:07:16 functional-449836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 19:07:17 functional-449836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 02 19:07:17 functional-449836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:17 functional-449836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:17 functional-449836 kubelet[5319]: E1202 19:07:17.220937 5319 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 19:07:17 functional-449836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 19:07:17 functional-449836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 19:07:17 functional-449836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 02 19:07:17 functional-449836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:17 functional-449836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:17 functional-449836 kubelet[5324]: E1202 19:07:17.963966 5324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 19:07:17 functional-449836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 19:07:17 functional-449836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 19:07:18 functional-449836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 02 19:07:18 functional-449836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:18 functional-449836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:18 functional-449836 kubelet[5416]: E1202 19:07:18.740396 5416 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 19:07:18 functional-449836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 19:07:18 functional-449836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 02 19:07:19 functional-449836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
Dec 02 19:07:19 functional-449836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:19 functional-449836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 02 19:07:19 functional-449836 kubelet[5512]: E1202 19:07:19.480083 5512 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 02 19:07:19 functional-449836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 19:07:19 functional-449836 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
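The kubelet journal above shows the root cause directly: kubelet v1.35.0-beta.0 fails configuration validation on a cgroup v1 host unless support is explicitly re-enabled. A short sketch for confirming the node's cgroup mode and for the 'FailCgroupV1' opt-in named in the [WARNING SystemVerification] text; the file name and the delivery mechanism are assumptions, not taken from this log:

    # cgroup2fs here means cgroup v2; tmpfs means the deprecated cgroup v1
    minikube ssh -p functional-449836 "stat -fc %T /sys/fs/cgroup/"

    # KubeletConfiguration snippet setting the option quoted in the warning.
    # (hypothetical file; the '[patches] ... target "kubeletconfiguration"' phase
    # above shows one place where such a setting could be merged at init time)
    cat <<'EOF' > kubelet-cgroupv1-optin.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF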
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-449836 -n functional-449836
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-449836 -n functional-449836: exit status 6 (333.240306ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1202 19:07:19.868520 49017 status.go:458] kubeconfig endpoint: get endpoint: "functional-449836" does not appear in /home/jenkins/minikube-integration/22021-2487/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-449836" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (507.66s)