=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-428744 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1201 19:19:27.087986 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/addons-569760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:19:54.803626 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/addons-569760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:46.979914 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:46.986426 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:46.997961 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:47.019488 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:47.060957 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:47.142611 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:47.304294 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:47.625963 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:48.268113 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:49.549649 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:52.111027 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:57.233193 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:22:07.475388 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:22:27.956775 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:23:08.918972 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:24:27.088111 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/addons-569760/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:24:30.840461 4305 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
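The nineteen cert_rotation errors above all trace to the same condition: client-go's transport cache still holds client-cert paths for profiles (addons-569760, functional-019259) whose directories were deleted by earlier tests, so every reload attempt fails. A minimal sketch of the underlying file load, assuming only the standard library; the client.key path is hypothetical, mirroring the client.crt path reported in the log:

```go
// Sketch of the file load behind the cert_rotation errors above: the
// profile directory was removed by an earlier test, so reloading the
// cached client cert fails. client.key is a hypothetical companion path.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	profile := "/home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-019259"
	_, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		// Prints the same "open ...: no such file or directory" as the log.
		fmt.Println("Loading client cert failed:", err)
	}
}
```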
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-428744 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m24.499053135s)
-- stdout --
* [functional-428744] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=21997
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21997-2497/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2497/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-428744" primary control-plane node in "functional-428744" cluster
* Pulling base image v0.0.48-1764169655-21974 ...
* Found network options:
- HTTP_PROXY=localhost:35755
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:35755 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
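These two warnings are the crux of the StartWithProxy scenario: HTTP_PROXY points at localhost (which minikube refuses to forward into the container) and NO_PROXY does not exempt the node IP. A minimal sketch of how the standard library would resolve a proxy for a request to the node under this environment; port 8441 is the apiserver port from the start flags, and the probe URL is illustrative:

```go
// Sketch of standard-library proxy resolution for a request to the
// minikube node. With HTTP_PROXY set to the value from the log and
// NO_PROXY empty, the request is routed through the proxy, which is
// what the warning above flags.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	os.Setenv("HTTP_PROXY", "localhost:35755") // value reported in the log
	// os.Setenv("NO_PROXY", "192.168.49.2")   // what the warning asks for

	req, _ := http.NewRequest("GET", "http://192.168.49.2:8441/", nil)
	proxyURL, err := http.ProxyFromEnvironment(req)
	if err != nil {
		fmt.Println("proxy lookup error:", err)
		return
	}
	if proxyURL == nil {
		fmt.Println("request bypasses the proxy")
	} else {
		fmt.Println("request goes through proxy:", proxyURL) // http://localhost:35755
	}
}
```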
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-428744 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-428744 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00013097s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000176325s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000176325s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
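All three failure dumps in the stderr block above end at the same place: kubeadm's wait-control-plane phase polls GET http://127.0.0.1:10248/healthz for up to 4m0s and only ever sees "connection refused", meaning no kubelet process is listening at all. A minimal sketch of an equivalent probe loop, standard library only; the 5-second retry cadence is illustrative, not kubeadm's actual interval:

```go
// Sketch of the kubelet health probe kubeadm's wait-control-plane phase
// is reported to run above. "connection refused" means nothing listens
// on 10248, i.e. the kubelet process is not running.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // kubeadm waits "up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
			return
		}
		fmt.Println("kubelet not ready:", err)
		time.Sleep(5 * time.Second) // illustrative cadence
	}
	fmt.Println("kubelet did not become healthy within 4m0s")
}
```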
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-428744 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
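The stderr block's own suggestion is to retry with --extra-config=kubelet.cgroup-driver=systemd, after kubeadm warned that cgroup v1 support in kubelet v1.35+ must be explicitly enabled via FailCgroupV1. A hedged sketch of that retry as a harness might issue it; whether the flag actually resolves this particular kubelet failure is not established by the log:

```go
// Sketch only: re-running the failed start with the flag the log itself
// suggests, using the same binary, profile, and flags as the original
// invocation. This demonstrates how the suggestion would be applied,
// not a confirmed fix.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-428744",
		"--memory=4096", "--apiserver-port=8441", "--wait=all",
		"--driver=docker", "--container-runtime=containerd",
		"--kubernetes-version=v1.35.0-beta.0",
		"--extra-config=kubelet.cgroup-driver=systemd") // the suggested flag
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed again:", err)
	}
}
```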
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run: docker inspect functional-428744
helpers_test.go:243: (dbg) docker inspect functional-428744:
-- stdout --
[
{
"Id": "0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270",
"Created": "2025-12-01T19:17:42.064970359Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 42803,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-01T19:17:42.147832287Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:ac919894123858c63a6b115b7a0677e38aafc32ba4f00c3ebbd7c61e958451be",
"ResolvConfPath": "/var/lib/docker/containers/0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270/hostname",
"HostsPath": "/var/lib/docker/containers/0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270/hosts",
"LogPath": "/var/lib/docker/containers/0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270/0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270-json.log",
"Name": "/functional-428744",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"functional-428744:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-428744",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "0a5f71818186b6efe00b0c4fd703113b9db93449ab67fc975198a29e2a89e270",
"LowerDir": "/var/lib/docker/overlay2/1a62d13b7fac30f74f1b012c6abe37674e739912606e8fb507d0d12f173758a0-init/diff:/var/lib/docker/overlay2/d615a1a7c8a8c16c226473407fa1a9f3f15588b2e938958b41966d29d830ad8b/diff",
"MergedDir": "/var/lib/docker/overlay2/1a62d13b7fac30f74f1b012c6abe37674e739912606e8fb507d0d12f173758a0/merged",
"UpperDir": "/var/lib/docker/overlay2/1a62d13b7fac30f74f1b012c6abe37674e739912606e8fb507d0d12f173758a0/diff",
"WorkDir": "/var/lib/docker/overlay2/1a62d13b7fac30f74f1b012c6abe37674e739912606e8fb507d0d12f173758a0/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "functional-428744",
"Source": "/var/lib/docker/volumes/functional-428744/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "functional-428744",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-428744",
"name.minikube.sigs.k8s.io": "functional-428744",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "51251ff92164671747855c7e0b3049c8a41696f58071f065fdb32c7fdee7e56a",
"SandboxKey": "/var/run/docker/netns/51251ff92164",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-428744": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "22:f4:3c:a2:cd:d7",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "32e9eb731fe0a52c62a7b3657fd1dee3e6c43cd7ae203e31dab0af674dff0487",
"EndpointID": "841971828a3ae8760afb7fa3bf2628bc9423d4b0ccde294eae5b28aecb27b14d",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-428744",
"0a5f71818186"
]
}
}
}
}
]
-- /stdout --
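The inspect dump confirms the container itself is healthy: running, a 4 GiB memory cap, and IP 192.168.49.2 on the functional-428744 network, matching the IP in the NO_PROXY warning. A minimal sketch of extracting just those fields with an inspect format template instead of parsing the full JSON; the template keys mirror the dump above:

```go
// Sketch: pull only the post-mortem-relevant fields from `docker inspect`
// via a Go format template. Against the dump above this prints
// "running 192.168.49.2".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	format := `{{.State.Status}} {{(index .NetworkSettings.Networks "functional-428744").IPAddress}}`
	out, err := exec.Command("docker", "inspect", "-f", format, "functional-428744").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("container state and IP: %s", out)
}
```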
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-428744 -n functional-428744
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-428744 -n functional-428744: exit status 6 (300.492043ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1201 19:26:05.559296 48511 status.go:458] kubeconfig endpoint: get endpoint: "functional-428744" does not appear in /home/jenkins/minikube-integration/21997-2497/kubeconfig
** /stderr **
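Exit status 6 here is the kubeconfig check, not the host check: the container reports Running, but because kubeadm never completed, minikube never wrote a functional-428744 entry into the kubeconfig. A sketch of the same lookup, assuming k8s.io/client-go is available; the path and profile name are taken from the log:

```go
// Sketch of the lookup behind the status error above: load the
// kubeconfig and check whether a cluster entry for the profile exists.
// Assumes k8s.io/client-go is on the module path.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21997-2497/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	cluster, ok := cfg.Clusters["functional-428744"]
	if !ok {
		// The condition reported above: the start never got far enough
		// to record this profile's endpoint.
		fmt.Println(`"functional-428744" does not appear in the kubeconfig`)
		return
	}
	fmt.Println("endpoint:", cluster.Server)
}
```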
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-arm64 -p functional-428744 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ image │ functional-019259 image ls │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image load --daemon kicbase/echo-server:functional-019259 --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ ssh │ functional-019259 ssh sudo cat /usr/share/ca-certificates/43052.pem │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ ssh │ functional-019259 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ ssh │ functional-019259 ssh sudo cat /etc/test/nested/copy/4305/hosts │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image load --daemon kicbase/echo-server:functional-019259 --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image save kicbase/echo-server:functional-019259 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ update-context │ functional-019259 update-context --alsologtostderr -v=2 │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ update-context │ functional-019259 update-context --alsologtostderr -v=2 │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image rm kicbase/echo-server:functional-019259 --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image save --daemon kicbase/echo-server:functional-019259 --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls --format yaml --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls --format short --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls --format json --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls --format table --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ ssh │ functional-019259 ssh pgrep buildkitd │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ │
│ image │ functional-019259 image build -t localhost/my-image:functional-019259 testdata/build --alsologtostderr │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ image │ functional-019259 image ls │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ delete │ -p functional-019259 │ functional-019259 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ 01 Dec 25 19:17 UTC │
│ start │ -p functional-428744 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-428744 │ jenkins │ v1.37.0 │ 01 Dec 25 19:17 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/01 19:17:40
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.25.3 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1201 19:17:40.795651 42501 out.go:360] Setting OutFile to fd 1 ...
I1201 19:17:40.795788 42501 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:17:40.795792 42501 out.go:374] Setting ErrFile to fd 2...
I1201 19:17:40.795796 42501 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:17:40.796215 42501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-2497/.minikube/bin
I1201 19:17:40.796858 42501 out.go:368] Setting JSON to false
I1201 19:17:40.798379 42501 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3612,"bootTime":1764613049,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1201 19:17:40.798507 42501 start.go:143] virtualization:
I1201 19:17:40.802919 42501 out.go:179] * [functional-428744] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1201 19:17:40.807692 42501 out.go:179] - MINIKUBE_LOCATION=21997
I1201 19:17:40.807760 42501 notify.go:221] Checking for updates...
I1201 19:17:40.814670 42501 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1201 19:17:40.817994 42501 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21997-2497/kubeconfig
I1201 19:17:40.821379 42501 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-2497/.minikube
I1201 19:17:40.824576 42501 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1201 19:17:40.827737 42501 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1201 19:17:40.831215 42501 driver.go:422] Setting default libvirt URI to qemu:///system
I1201 19:17:40.863683 42501 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1201 19:17:40.863787 42501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1201 19:17:40.925169 42501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-01 19:17:40.914937852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1201 19:17:40.925260 42501 docker.go:319] overlay module found
I1201 19:17:40.928482 42501 out.go:179] * Using the docker driver based on user configuration
I1201 19:17:40.931575 42501 start.go:309] selected driver: docker
I1201 19:17:40.931584 42501 start.go:927] validating driver "docker" against <nil>
I1201 19:17:40.931595 42501 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1201 19:17:40.932313 42501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1201 19:17:40.986812 42501 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-01 19:17:40.977848412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1201 19:17:40.986965 42501 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1201 19:17:40.987187 42501 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1201 19:17:40.990095 42501 out.go:179] * Using Docker driver with root privileges
I1201 19:17:40.993092 42501 cni.go:84] Creating CNI manager for ""
I1201 19:17:40.993158 42501 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1201 19:17:40.993165 42501 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1201 19:17:40.993267 42501 start.go:353] cluster config:
{Name:functional-428744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-428744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1201 19:17:40.996465 42501 out.go:179] * Starting "functional-428744" primary control-plane node in "functional-428744" cluster
I1201 19:17:40.999343 42501 cache.go:134] Beginning downloading kic base image for docker with containerd
I1201 19:17:41.002395 42501 out.go:179] * Pulling base image v0.0.48-1764169655-21974 ...
I1201 19:17:41.005474 42501 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1201 19:17:41.005539 42501 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon
I1201 19:17:41.024875 42501 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b in local docker daemon, skipping pull
I1201 19:17:41.024887 42501 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b exists in daemon, skipping load
W1201 19:17:41.058710 42501 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
W1201 19:17:41.255796 42501 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 status code: 404
I1201 19:17:41.256024 42501 cache.go:107] acquiring lock: {Name:mk5a09122d02521ef34c52b7e36a585d52fd9f21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256124 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1201 19:17:41.256138 42501 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.63µs
I1201 19:17:41.256151 42501 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1201 19:17:41.256162 42501 cache.go:107] acquiring lock: {Name:mka1ee62c1593bc03c858a35d26f9e2c2b690f2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256173 42501 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/config.json ...
I1201 19:17:41.256197 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
I1201 19:17:41.256202 42501 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 41.839µs
I1201 19:17:41.256200 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/config.json: {Name:mk36dc86e76691dbdae0e327f196c4488b2d3a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:41.256206 42501 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
I1201 19:17:41.256215 42501 cache.go:107] acquiring lock: {Name:mkad6a428cd1ee354a7cfbf702340281bc69cb07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256287 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
I1201 19:17:41.256291 42501 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 76.573µs
I1201 19:17:41.256296 42501 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
I1201 19:17:41.256305 42501 cache.go:107] acquiring lock: {Name:mkb18b26fcf663181a7a99db9cd5b6336ab8823e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256332 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
I1201 19:17:41.256336 42501 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 32.173µs
I1201 19:17:41.256340 42501 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
I1201 19:17:41.256352 42501 cache.go:243] Successfully downloaded all kic artifacts
I1201 19:17:41.256348 42501 cache.go:107] acquiring lock: {Name:mk1428d1cd9ff7a81b7e1db938ad2d6c63d6f0a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256372 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
I1201 19:17:41.256374 42501 start.go:360] acquireMachinesLock for functional-428744: {Name:mk3b5a813e1aa5988e2f3f833300a148fed85bf9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256376 42501 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 28.661µs
I1201 19:17:41.256381 42501 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
I1201 19:17:41.256389 42501 cache.go:107] acquiring lock: {Name:mk0ada5a1c55cc71f4e4c3bcb210275f4c579244 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256410 42501 start.go:364] duration metric: took 28.029µs to acquireMachinesLock for "functional-428744"
I1201 19:17:41.256422 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
I1201 19:17:41.256426 42501 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 38.5µs
I1201 19:17:41.256430 42501 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
I1201 19:17:41.256442 42501 cache.go:107] acquiring lock: {Name:mk38216f1b78a5797f7bdaa5dbcc21785e81c0b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256425 42501 start.go:93] Provisioning new machine with config: &{Name:functional-428744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-428744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1201 19:17:41.256480 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 exists
I1201 19:17:41.256484 42501 start.go:125] createHost starting for "" (driver="docker")
I1201 19:17:41.256490 42501 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0" took 52.514µs
I1201 19:17:41.256494 42501 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 succeeded
I1201 19:17:41.256502 42501 cache.go:107] acquiring lock: {Name:mk5372b924cb355fb27744bf15e6650674c1123b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1201 19:17:41.256530 42501 cache.go:115] /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
I1201 19:17:41.256533 42501 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 32.805µs
I1201 19:17:41.256538 42501 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
I1201 19:17:41.256542 42501 cache.go:87] Successfully saved all images to host disk.
I1201 19:17:41.261741 42501 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1201 19:17:41.262045 42501 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:35755 to docker env.
I1201 19:17:41.262120 42501 start.go:159] libmachine.API.Create for "functional-428744" (driver="docker")
I1201 19:17:41.262142 42501 client.go:173] LocalClient.Create starting
I1201 19:17:41.262241 42501 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca.pem
I1201 19:17:41.262273 42501 main.go:143] libmachine: Decoding PEM data...
I1201 19:17:41.262286 42501 main.go:143] libmachine: Parsing certificate...
I1201 19:17:41.262350 42501 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-2497/.minikube/certs/cert.pem
I1201 19:17:41.262366 42501 main.go:143] libmachine: Decoding PEM data...
I1201 19:17:41.262377 42501 main.go:143] libmachine: Parsing certificate...
I1201 19:17:41.262740 42501 cli_runner.go:164] Run: docker network inspect functional-428744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1201 19:17:41.278827 42501 cli_runner.go:211] docker network inspect functional-428744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1201 19:17:41.278910 42501 network_create.go:284] running [docker network inspect functional-428744] to gather additional debugging logs...
I1201 19:17:41.278925 42501 cli_runner.go:164] Run: docker network inspect functional-428744
W1201 19:17:41.299667 42501 cli_runner.go:211] docker network inspect functional-428744 returned with exit code 1
I1201 19:17:41.299688 42501 network_create.go:287] error running [docker network inspect functional-428744]: docker network inspect functional-428744: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-428744 not found
I1201 19:17:41.299699 42501 network_create.go:289] output of [docker network inspect functional-428744]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-428744 not found
** /stderr **
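The inspect-then-retry sequence above is how minikube decides the network does not exist yet: a nonzero exit from `docker network inspect` is treated as absence, and the bare command is re-run only to capture stdout/stderr for the log. A minimal Go sketch of that probe pattern (hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// networkExists probes for a Docker network by name, treating a nonzero
// exit status (e.g. "network ... not found") as absence rather than failure.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			fmt.Printf("debug output: %s\n", out) // gathered for the log, as above
			return false, nil
		}
		return false, err // daemon unreachable, binary missing, etc.
	}
	return true, nil
}

func main() {
	exists, err := networkExists("functional-428744")
	fmt.Println(exists, err)
}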
I1201 19:17:41.299801 42501 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1201 19:17:41.318389 42501 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197ef90}
I1201 19:17:41.318429 42501 network_create.go:124] attempt to create docker network functional-428744 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1201 19:17:41.318488 42501 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-428744 functional-428744
I1201 19:17:41.377481 42501 network_create.go:108] docker network functional-428744 192.168.49.0/24 created
I1201 19:17:41.377527 42501 kic.go:121] calculated static IP "192.168.49.2" for the "functional-428744" container
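The gateway and the node's static IP follow directly from the chosen subnet: the first host address (.1) goes to the bridge gateway and the second (.2) to the node, matching the network record logged at 19:17:41.318389. A sketch of that derivation, assuming an IPv4 /24 as here:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Parse the chosen subnet; the masked base address is 192.168.49.0.
	_, subnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := subnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], base[3]+1)     // 192.168.49.1
	firstClient := net.IPv4(base[0], base[1], base[2], base[3]+2) // 192.168.49.2
	fmt.Println(gateway, firstClient)
}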
I1201 19:17:41.377599 42501 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1201 19:17:41.393797 42501 cli_runner.go:164] Run: docker volume create functional-428744 --label name.minikube.sigs.k8s.io=functional-428744 --label created_by.minikube.sigs.k8s.io=true
I1201 19:17:41.412762 42501 oci.go:103] Successfully created a docker volume functional-428744
I1201 19:17:41.412831 42501 cli_runner.go:164] Run: docker run --rm --name functional-428744-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-428744 --entrypoint /usr/bin/test -v functional-428744:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b -d /var/lib
I1201 19:17:41.975193 42501 oci.go:107] Successfully prepared a docker volume functional-428744
I1201 19:17:41.975249 42501 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
W1201 19:17:41.975381 42501 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1201 19:17:41.975486 42501 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1201 19:17:42.043513 42501 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-428744 --name functional-428744 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-428744 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-428744 --network functional-428744 --ip 192.168.49.2 --volume functional-428744:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b
I1201 19:17:42.401171 42501 cli_runner.go:164] Run: docker container inspect functional-428744 --format={{.State.Running}}
I1201 19:17:42.423931 42501 cli_runner.go:164] Run: docker container inspect functional-428744 --format={{.State.Status}}
I1201 19:17:42.453347 42501 cli_runner.go:164] Run: docker exec functional-428744 stat /var/lib/dpkg/alternatives/iptables
I1201 19:17:42.506911 42501 oci.go:144] the created container "functional-428744" has a running status.
I1201 19:17:42.506930 42501 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa...
I1201 19:17:42.709259 42501 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
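The kic SSH key is an ordinary RSA key pair; its public half is what lands in /home/docker/.ssh/authorized_keys above (381 bytes). A sketch of generating an equivalent pair, assuming golang.org/x/crypto/ssh for the authorized_keys encoding; this is illustrative, not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate the key pair; the private half would be written to id_rsa.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Encode the public half in authorized_keys format.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("id_rsa: %d bytes\n", len(privPEM))
	fmt.Printf("authorized_keys entry: %s", ssh.MarshalAuthorizedKey(pub))
}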
I1201 19:17:42.734641 42501 cli_runner.go:164] Run: docker container inspect functional-428744 --format={{.State.Status}}
I1201 19:17:42.754124 42501 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1201 19:17:42.754135 42501 kic_runner.go:114] Args: [docker exec --privileged functional-428744 chown docker:docker /home/docker/.ssh/authorized_keys]
I1201 19:17:42.811433 42501 cli_runner.go:164] Run: docker container inspect functional-428744 --format={{.State.Status}}
I1201 19:17:42.837698 42501 machine.go:94] provisionDockerMachine start ...
I1201 19:17:42.837800 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:42.868065 42501 main.go:143] libmachine: Using SSH client type: native
I1201 19:17:42.868392 42501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1201 19:17:42.868399 42501 main.go:143] libmachine: About to run SSH command:
hostname
I1201 19:17:42.868975 42501 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49900->127.0.0.1:32788: read: connection reset by peer
I1201 19:17:46.021213 42501 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-428744
I1201 19:17:46.021228 42501 ubuntu.go:182] provisioning hostname "functional-428744"
I1201 19:17:46.021298 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:46.039633 42501 main.go:143] libmachine: Using SSH client type: native
I1201 19:17:46.039954 42501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1201 19:17:46.039963 42501 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-428744 && echo "functional-428744" | sudo tee /etc/hostname
I1201 19:17:46.198997 42501 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-428744
I1201 19:17:46.199072 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:46.216620 42501 main.go:143] libmachine: Using SSH client type: native
I1201 19:17:46.216937 42501 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1201 19:17:46.216950 42501 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-428744' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-428744/g' /etc/hosts;
	else
		echo '127.0.1.1 functional-428744' | sudo tee -a /etc/hosts;
	fi
fi
I1201 19:17:46.365622 42501 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1201 19:17:46.365638 42501 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21997-2497/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-2497/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-2497/.minikube}
I1201 19:17:46.365670 42501 ubuntu.go:190] setting up certificates
I1201 19:17:46.365677 42501 provision.go:84] configureAuth start
I1201 19:17:46.365739 42501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-428744
I1201 19:17:46.384451 42501 provision.go:143] copyHostCerts
I1201 19:17:46.384514 42501 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2497/.minikube/key.pem, removing ...
I1201 19:17:46.384522 42501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2497/.minikube/key.pem
I1201 19:17:46.384600 42501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-2497/.minikube/key.pem (1679 bytes)
I1201 19:17:46.384695 42501 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2497/.minikube/ca.pem, removing ...
I1201 19:17:46.384699 42501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2497/.minikube/ca.pem
I1201 19:17:46.384723 42501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-2497/.minikube/ca.pem (1078 bytes)
I1201 19:17:46.384785 42501 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-2497/.minikube/cert.pem, removing ...
I1201 19:17:46.384788 42501 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-2497/.minikube/cert.pem
I1201 19:17:46.384810 42501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-2497/.minikube/cert.pem (1123 bytes)
I1201 19:17:46.384863 42501 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-2497/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca-key.pem org=jenkins.functional-428744 san=[127.0.0.1 192.168.49.2 functional-428744 localhost minikube]
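The generated server.pem carries exactly the SANs listed in the san=[...] field above. A self-signed sketch of such a certificate with crypto/x509 (the real step signs with ca-key.pem rather than self-signing):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-428744"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// SANs from the san=[...] list in the log line.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-428744", "localhost", "minikube"},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server certificate: %d DER bytes\n", len(der))
}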
I1201 19:17:46.545818 42501 provision.go:177] copyRemoteCerts
I1201 19:17:46.545871 42501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1201 19:17:46.545911 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:46.566878 42501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa Username:docker}
I1201 19:17:46.669055 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1201 19:17:46.686968 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1201 19:17:46.704991 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1201 19:17:46.722139 42501 provision.go:87] duration metric: took 356.440025ms to configureAuth
I1201 19:17:46.722156 42501 ubuntu.go:206] setting minikube options for container-runtime
I1201 19:17:46.722340 42501 config.go:182] Loaded profile config "functional-428744": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1201 19:17:46.722345 42501 machine.go:97] duration metric: took 3.884637267s to provisionDockerMachine
I1201 19:17:46.722357 42501 client.go:176] duration metric: took 5.460204261s to LocalClient.Create
I1201 19:17:46.722370 42501 start.go:167] duration metric: took 5.460252623s to libmachine.API.Create "functional-428744"
I1201 19:17:46.722376 42501 start.go:293] postStartSetup for "functional-428744" (driver="docker")
I1201 19:17:46.722385 42501 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1201 19:17:46.722442 42501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1201 19:17:46.722491 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:46.740164 42501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa Username:docker}
I1201 19:17:46.850118 42501 ssh_runner.go:195] Run: cat /etc/os-release
I1201 19:17:46.853621 42501 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1201 19:17:46.853639 42501 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1201 19:17:46.853650 42501 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2497/.minikube/addons for local assets ...
I1201 19:17:46.853706 42501 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-2497/.minikube/files for local assets ...
I1201 19:17:46.853788 42501 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2497/.minikube/files/etc/ssl/certs/43052.pem -> 43052.pem in /etc/ssl/certs
I1201 19:17:46.853879 42501 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-2497/.minikube/files/etc/test/nested/copy/4305/hosts -> hosts in /etc/test/nested/copy/4305
I1201 19:17:46.853924 42501 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4305
I1201 19:17:46.861782 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/files/etc/ssl/certs/43052.pem --> /etc/ssl/certs/43052.pem (1708 bytes)
I1201 19:17:46.879952 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/files/etc/test/nested/copy/4305/hosts --> /etc/test/nested/copy/4305/hosts (40 bytes)
I1201 19:17:46.897587 42501 start.go:296] duration metric: took 175.198342ms for postStartSetup
I1201 19:17:46.897951 42501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-428744
I1201 19:17:46.915022 42501 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/config.json ...
I1201 19:17:46.915289 42501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1201 19:17:46.915329 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:46.932336 42501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa Username:docker}
I1201 19:17:47.034649 42501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1201 19:17:47.039853 42501 start.go:128] duration metric: took 5.783354976s to createHost
I1201 19:17:47.039870 42501 start.go:83] releasing machines lock for "functional-428744", held for 5.783453318s
I1201 19:17:47.039949 42501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-428744
I1201 19:17:47.063565 42501 out.go:179] * Found network options:
I1201 19:17:47.066378 42501 out.go:179] - HTTP_PROXY=localhost:35755
W1201 19:17:47.069440 42501 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1201 19:17:47.072453 42501 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1201 19:17:47.075420 42501 ssh_runner.go:195] Run: cat /version.json
I1201 19:17:47.075480 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:47.075518 42501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1201 19:17:47.075572 42501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-428744
I1201 19:17:47.096537 42501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa Username:docker}
I1201 19:17:47.097632 42501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21997-2497/.minikube/machines/functional-428744/id_rsa Username:docker}
I1201 19:17:47.197731 42501 ssh_runner.go:195] Run: systemctl --version
I1201 19:17:47.295671 42501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1201 19:17:47.300189 42501 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1201 19:17:47.300250 42501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1201 19:17:47.327264 42501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1201 19:17:47.327278 42501 start.go:496] detecting cgroup driver to use...
I1201 19:17:47.327308 42501 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1201 19:17:47.327368 42501 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1201 19:17:47.342418 42501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1201 19:17:47.355549 42501 docker.go:218] disabling cri-docker service (if available) ...
I1201 19:17:47.355615 42501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1201 19:17:47.374124 42501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1201 19:17:47.393422 42501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1201 19:17:47.526646 42501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1201 19:17:47.648774 42501 docker.go:234] disabling docker service ...
I1201 19:17:47.648840 42501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1201 19:17:47.669904 42501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1201 19:17:47.683054 42501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1201 19:17:47.810306 42501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1201 19:17:47.928288 42501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1201 19:17:47.941584 42501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1201 19:17:47.955446 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1201 19:17:47.964530 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1201 19:17:47.973450 42501 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1201 19:17:47.973535 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1201 19:17:47.982691 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1201 19:17:47.991975 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1201 19:17:48.001413 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1201 19:17:48.010820 42501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1201 19:17:48.018939 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1201 19:17:48.027797 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1201 19:17:48.036839 42501 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
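Each of these reconfiguration steps is an in-place sed over /etc/containerd/config.toml. One of them, the SystemdCgroup flip at 19:17:47.973535, expressed as a Go regexp rewrite purely for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\"]\n  SystemdCgroup = true\n"
	// Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}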
I1201 19:17:48.049533 42501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1201 19:17:48.058111 42501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1201 19:17:48.067031 42501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1201 19:17:48.195270 42501 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1201 19:17:48.295283 42501 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1201 19:17:48.295339 42501 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
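"Will wait 60s for socket path" is a poll loop: stat the socket until it appears or the deadline passes. A local-filesystem sketch of that wait (the real check runs `stat` over SSH inside the node):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats path until it exists or timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
}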
I1201 19:17:48.299806 42501 start.go:564] Will wait 60s for crictl version
I1201 19:17:48.299872 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.303811 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1201 19:17:48.329647 42501 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.1.5
RuntimeApiVersion: v1
I1201 19:17:48.329712 42501 ssh_runner.go:195] Run: containerd --version
I1201 19:17:48.349260 42501 ssh_runner.go:195] Run: containerd --version
I1201 19:17:48.376822 42501 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.1.5 ...
I1201 19:17:48.379795 42501 cli_runner.go:164] Run: docker network inspect functional-428744 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1201 19:17:48.396068 42501 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1201 19:17:48.400054 42501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1201 19:17:48.409662 42501 kubeadm.go:884] updating cluster {Name:functional-428744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-428744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1201 19:17:48.409755 42501 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1201 19:17:48.409805 42501 ssh_runner.go:195] Run: sudo crictl images --output json
I1201 19:17:48.433667 42501 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
I1201 19:17:48.433681 42501 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
I1201 19:17:48.433728 42501 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:48.433941 42501 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:48.434028 42501 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:48.434123 42501 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:48.434223 42501 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:48.434307 42501 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
I1201 19:17:48.434388 42501 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
I1201 19:17:48.434474 42501 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:48.435479 42501 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:48.435888 42501 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:48.436018 42501 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:48.436133 42501 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:48.436243 42501 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:48.436494 42501 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:48.436623 42501 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
I1201 19:17:48.436744 42501 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
I1201 19:17:48.833940 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" and sha "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be"
I1201 19:17:48.834000 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:48.850210 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.5-0" and sha "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42"
I1201 19:17:48.850268 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.5-0
I1201 19:17:48.854108 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" and sha "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b"
I1201 19:17:48.854164 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:48.860713 42501 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be" in container runtime
I1201 19:17:48.860746 42501 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:48.860791 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.868342 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.35.0-beta.0" and sha "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904"
I1201 19:17:48.868405 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:48.872551 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"
I1201 19:17:48.872612 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
I1201 19:17:48.880583 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.13.1" and sha "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf"
I1201 19:17:48.880639 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:48.887422 42501 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42" in container runtime
I1201 19:17:48.887452 42501 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
I1201 19:17:48.887499 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.903727 42501 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b" in container runtime
I1201 19:17:48.903759 42501 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:48.903805 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.903878 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:48.914214 42501 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904" in container runtime
I1201 19:17:48.914247 42501 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:48.914293 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.928976 42501 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
I1201 19:17:48.929009 42501 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
I1201 19:17:48.929053 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.932901 42501 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
I1201 19:17:48.932934 42501 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:48.932979 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:48.933044 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1201 19:17:48.957631 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:48.957696 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:48.957751 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1201 19:17:48.957812 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:48.978974 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1201 19:17:48.979048 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:49.007501 42501 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" and sha "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4"
I1201 19:17:49.007559 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:49.057111 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
I1201 19:17:49.057190 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:49.057257 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:49.057338 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1201 19:17:49.076608 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
I1201 19:17:49.076670 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:49.084671 42501 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4" in container runtime
I1201 19:17:49.084705 42501 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:49.084750 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:49.169242 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
I1201 19:17:49.169325 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
I1201 19:17:49.169398 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1201 19:17:49.169471 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
I1201 19:17:49.169558 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
I1201 19:17:49.169630 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
I1201 19:17:49.169684 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0
I1201 19:17:49.169730 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
I1201 19:17:49.169794 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:49.237299 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
I1201 19:17:49.237388 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1201 19:17:49.237458 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
I1201 19:17:49.237518 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
I1201 19:17:49.237574 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
I1201 19:17:49.237585 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (20672000 bytes)
I1201 19:17:49.278815 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
I1201 19:17:49.278904 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
I1201 19:17:49.278968 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
I1201 19:17:49.279009 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1201 19:17:49.287243 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
I1201 19:17:49.287290 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (21148160 bytes)
I1201 19:17:49.287368 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:49.287424 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
I1201 19:17:49.287434 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (15401984 bytes)
I1201 19:17:49.287473 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
I1201 19:17:49.287480 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
I1201 19:17:49.325605 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
I1201 19:17:49.325630 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
I1201 19:17:49.325673 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
I1201 19:17:49.325682 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (22432256 bytes)
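Each of these transfers follows the same idiom: stat the target, and if the stat exits 1 (file missing), scp the cached tarball across. The check-then-copy idiom sketched against the local filesystem (ensureFile is a hypothetical name; the real runner stats and copies over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst is missing, the same
// decision the stat/scp pairs above encode.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(ensureFile("/tmp/cached-image.tar", "/tmp/loaded-image.tar"))
}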
I1201 19:17:49.370472 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
I1201 19:17:49.371338 42501 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
I1201 19:17:49.371391 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
W1201 19:17:49.686756 42501 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
I1201 19:17:49.686883 42501 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51"
I1201 19:17:49.686936 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:49.715256 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
I1201 19:17:49.715673 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1201 19:17:49.802690 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
I1201 19:17:49.802731 42501 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
I1201 19:17:49.802759 42501 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:49.802807 42501 ssh_runner.go:195] Run: which crictl
I1201 19:17:49.802867 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
I1201 19:17:49.802891 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (24689152 bytes)
I1201 19:17:49.806289 42501 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1201 19:17:49.806357 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
I1201 19:17:49.868317 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:51.074640 42501 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (1.268258674s)
I1201 19:17:51.074659 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
I1201 19:17:51.074659 42501 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.206323311s)
I1201 19:17:51.074684 42501 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1201 19:17:51.074720 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:51.074725 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
I1201 19:17:52.050134 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
I1201 19:17:52.050160 42501 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
I1201 19:17:52.050218 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0
I1201 19:17:52.050227 42501 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
I1201 19:17:53.406997 42501 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.5-0: (1.356756377s)
I1201 19:17:53.407012 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.5-0 from cache
I1201 19:17:53.407027 42501 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.13.1
I1201 19:17:53.407073 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.13.1
I1201 19:17:53.407129 42501 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.356891512s)
I1201 19:17:53.407151 42501 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I1201 19:17:53.407221 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I1201 19:17:54.359757 42501 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I1201 19:17:54.359759 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
I1201 19:17:54.359780 42501 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1201 19:17:54.359781 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
I1201 19:17:54.359822 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
I1201 19:17:55.332537 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
I1201 19:17:55.332562 42501 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1201 19:17:55.332610 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
I1201 19:17:56.363584 42501 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.030952436s)
I1201 19:17:56.363601 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
I1201 19:17:56.363627 42501 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I1201 19:17:56.363677 42501 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
I1201 19:17:56.703938 42501 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21997-2497/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I1201 19:17:56.703978 42501 cache_images.go:125] Successfully loaded all cached images
I1201 19:17:56.703982 42501 cache_images.go:94] duration metric: took 8.270291268s to LoadCachedImages
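Per image, the pipeline above is: look it up in the runtime (`ctr -n=k8s.io images ls`), `crictl rmi` any stale tag, copy the cached tarball, then `ctr -n=k8s.io images import` it. The import step as a small exec wrapper, for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

// loadImage imports a cached image tarball into containerd's k8s.io
// namespace, the command behind each "Loading image" line above.
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	fmt.Println(loadImage("/var/lib/minikube/images/pause_3.10.1"))
}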
I1201 19:17:56.703994 42501 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1201 19:17:56.704095 42501 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-428744 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-428744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1201 19:17:56.704164 42501 ssh_runner.go:195] Run: sudo crictl info
I1201 19:17:56.728810 42501 cni.go:84] Creating CNI manager for ""
I1201 19:17:56.728830 42501 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1201 19:17:56.728853 42501 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1201 19:17:56.728875 42501 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-428744 NodeName:functional-428744 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1201 19:17:56.728999 42501 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-428744"
  kubeletExtraArgs:
  - name: "node-ip"
    value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
  - name: "enable-admission-plugins"
    value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
  - name: "allocate-node-cidrs"
    value: "true"
  - name: "leader-elect"
    value: "false"
scheduler:
  extraArgs:
  - name: "leader-elect"
    value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
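kubeadm.go:196 renders this config from the option set logged at kubeadm.go:190. A minimal text/template sketch of that rendering step for the InitConfiguration header (field names here are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

const header = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.49.2", 8441, "/run/containerd/containerd.sock", "functional-428744"}
	template.Must(template.New("kubeadm").Parse(header)).Execute(os.Stdout, opts)
}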
I1201 19:17:56.729073 42501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1201 19:17:56.737296 42501 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
I1201 19:17:56.737363 42501 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
I1201 19:17:56.745368 42501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubectl.sha256
I1201 19:17:56.745392 42501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubelet.sha256
I1201 19:17:56.745433 42501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1201 19:17:56.745471 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
I1201 19:17:56.745374 42501 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/arm64/kubeadm.sha256
I1201 19:17:56.745549 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
I1201 19:17:56.761793 42501 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
I1201 19:17:56.761814 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (55181496 bytes)
I1201 19:17:56.761822 42501 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
I1201 19:17:56.761861 42501 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
I1201 19:17:56.761869 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (68354232 bytes)
I1201 19:17:56.795807 42501 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
I1201 19:17:56.795833 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/cache/linux/arm64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (54329636 bytes)
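An aside on the three downloads above: the ?checksum=file:... query tells minikube's downloader to verify each binary against the published .sha256 file before use. The same verification done by hand, as a sketch (URLs exactly as in the log; curl and sha256sum assumed available on the host):

    V=v1.35.0-beta.0
    for BIN in kubectl kubeadm kubelet; do
      curl -fsSLO "https://dl.k8s.io/release/${V}/bin/linux/arm64/${BIN}"
      # The .sha256 file holds just the hash; sha256sum --check wants
      # "<hash>  <file>" pairs, here fed in via stdin.
      echo "$(curl -fsSL "https://dl.k8s.io/release/${V}/bin/linux/arm64/${BIN}.sha256")  ${BIN}" \
        | sha256sum --check
    done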
I1201 19:17:57.551549 42501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1201 19:17:57.564111 42501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1201 19:17:57.578870 42501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1201 19:17:57.592926 42501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1201 19:17:57.606647 42501 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1201 19:17:57.610266 42501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
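An aside on the one-liner above: it is minikube's idempotent /etc/hosts update, reached only because the preceding grep found no existing entry. The same logic spelled out step by step (a sketch of the identical command, not a different implementation):

    # 1. Emit /etc/hosts minus any old control-plane.minikube.internal line
    #    (the $'\t' anchor matches the tab-separated entry minikube writes).
    # 2. Append the current mapping.
    # 3. Copy the result back with sudo, so only the cp needs root,
    #    not the shell redirection.
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.49.2 control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts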
I1201 19:17:57.620028 42501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1201 19:17:57.736624 42501 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1201 19:17:57.754907 42501 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744 for IP: 192.168.49.2
I1201 19:17:57.754917 42501 certs.go:195] generating shared ca certs ...
I1201 19:17:57.754933 42501 certs.go:227] acquiring lock for ca certs: {Name:mk44a77eee505d9292fa413ae7abec1c290fec42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:57.755064 42501 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-2497/.minikube/ca.key
I1201 19:17:57.755105 42501 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-2497/.minikube/proxy-client-ca.key
I1201 19:17:57.755116 42501 certs.go:257] generating profile certs ...
I1201 19:17:57.755180 42501 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/client.key
I1201 19:17:57.755193 42501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/client.crt with IP's: []
I1201 19:17:58.285091 42501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/client.crt ...
I1201 19:17:58.285107 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/client.crt: {Name:mk4125e51fcde1a12dfbc3371f2c3c2d9ace2a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:58.285301 42501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/client.key ...
I1201 19:17:58.285307 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/client.key: {Name:mk28fb801c4f5bedba2a0b36654671ad426b602a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:58.285403 42501 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.key.910e2deb
I1201 19:17:58.285414 42501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.crt.910e2deb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1201 19:17:58.393182 42501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.crt.910e2deb ...
I1201 19:17:58.393195 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.crt.910e2deb: {Name:mkc2fd9dae83de1ddd966cc88540933bcaf4bb23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:58.393357 42501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.key.910e2deb ...
I1201 19:17:58.393364 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.key.910e2deb: {Name:mk366f7903020191b512ede05c723f216665e18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:58.393444 42501 certs.go:382] copying /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.crt.910e2deb -> /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.crt
I1201 19:17:58.393542 42501 certs.go:386] copying /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.key.910e2deb -> /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.key
I1201 19:17:58.393596 42501 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.key
I1201 19:17:58.393607 42501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.crt with IP's: []
I1201 19:17:58.621478 42501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.crt ...
I1201 19:17:58.621499 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.crt: {Name:mkbb5626c5497aed7c2a3f9dba2c0d2539c2b74c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:58.621662 42501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.key ...
I1201 19:17:58.621669 42501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.key: {Name:mked6cada6737b61749efaada82ba42b0f1d4726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1201 19:17:58.621846 42501 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/4305.pem (1338 bytes)
W1201 19:17:58.621886 42501 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-2497/.minikube/certs/4305_empty.pem, impossibly tiny 0 bytes
I1201 19:17:58.621899 42501 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca-key.pem (1675 bytes)
I1201 19:17:58.621924 42501 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/ca.pem (1078 bytes)
I1201 19:17:58.621946 42501 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/cert.pem (1123 bytes)
I1201 19:17:58.621968 42501 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2497/.minikube/certs/key.pem (1679 bytes)
I1201 19:17:58.622009 42501 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-2497/.minikube/files/etc/ssl/certs/43052.pem (1708 bytes)
I1201 19:17:58.622570 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1201 19:17:58.639969 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1201 19:17:58.656837 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1201 19:17:58.675924 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1201 19:17:58.694090 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1201 19:17:58.712147 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1201 19:17:58.730816 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1201 19:17:58.748668 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/profiles/functional-428744/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1201 19:17:58.765611 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/files/etc/ssl/certs/43052.pem --> /usr/share/ca-certificates/43052.pem (1708 bytes)
I1201 19:17:58.782440 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1201 19:17:58.799514 42501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-2497/.minikube/certs/4305.pem --> /usr/share/ca-certificates/4305.pem (1338 bytes)
I1201 19:17:58.816027 42501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1201 19:17:58.828524 42501 ssh_runner.go:195] Run: openssl version
I1201 19:17:58.834632 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43052.pem && ln -fs /usr/share/ca-certificates/43052.pem /etc/ssl/certs/43052.pem"
I1201 19:17:58.842479 42501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43052.pem
I1201 19:17:58.845849 42501 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 1 19:17 /usr/share/ca-certificates/43052.pem
I1201 19:17:58.845899 42501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43052.pem
I1201 19:17:58.886457 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43052.pem /etc/ssl/certs/3ec20f2e.0"
I1201 19:17:58.894894 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1201 19:17:58.903138 42501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1201 19:17:58.907005 42501 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 1 19:07 /usr/share/ca-certificates/minikubeCA.pem
I1201 19:17:58.907059 42501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1201 19:17:58.948042 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1201 19:17:58.956178 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4305.pem && ln -fs /usr/share/ca-certificates/4305.pem /etc/ssl/certs/4305.pem"
I1201 19:17:58.963941 42501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4305.pem
I1201 19:17:58.967549 42501 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 1 19:17 /usr/share/ca-certificates/4305.pem
I1201 19:17:58.967602 42501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4305.pem
I1201 19:17:59.008445 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4305.pem /etc/ssl/certs/51391683.0"
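An aside on the hash-and-symlink sequence above: OpenSSL locates CA certificates in /etc/ssl/certs by subject-name hash, so each PEM needs an alias named <hash>.0. That is where names like 3ec20f2e.0 and b5213941.0 come from. Reproduced for one cert, as a sketch using the paths from the log:

    # -hash prints the subject-name hash OpenSSL uses for directory lookups.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Expose the cert under that name (b5213941.0 in this log).
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"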
I1201 19:17:59.016620 42501 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1201 19:17:59.020182 42501 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1201 19:17:59.020225 42501 kubeadm.go:401] StartCluster: {Name:functional-428744 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-428744 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1201 19:17:59.020297 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1201 19:17:59.020353 42501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1201 19:17:59.056021 42501 cri.go:89] found id: ""
I1201 19:17:59.056087 42501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1201 19:17:59.063935 42501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1201 19:17:59.071607 42501 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1201 19:17:59.071659 42501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1201 19:17:59.079216 42501 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1201 19:17:59.079227 42501 kubeadm.go:158] found existing configuration files:
I1201 19:17:59.079274 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1201 19:17:59.086684 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1201 19:17:59.086740 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1201 19:17:59.093801 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1201 19:17:59.101118 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1201 19:17:59.101183 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1201 19:17:59.108432 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1201 19:17:59.116024 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1201 19:17:59.116079 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1201 19:17:59.123418 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1201 19:17:59.131115 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1201 19:17:59.131172 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1201 19:17:59.138708 42501 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1201 19:17:59.179598 42501 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1201 19:17:59.179712 42501 kubeadm.go:319] [preflight] Running pre-flight checks
I1201 19:17:59.254274 42501 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1201 19:17:59.254339 42501 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1201 19:17:59.254373 42501 kubeadm.go:319] OS: Linux
I1201 19:17:59.254420 42501 kubeadm.go:319] CGROUPS_CPU: enabled
I1201 19:17:59.254467 42501 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1201 19:17:59.254512 42501 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1201 19:17:59.254559 42501 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1201 19:17:59.254608 42501 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1201 19:17:59.254654 42501 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1201 19:17:59.254697 42501 kubeadm.go:319] CGROUPS_PIDS: enabled
I1201 19:17:59.254744 42501 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1201 19:17:59.254789 42501 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1201 19:17:59.325799 42501 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1201 19:17:59.325925 42501 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1201 19:17:59.326024 42501 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
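An aside on the preflight phase above: the long --ignore-preflight-errors list in the kubeadm init command exists because a kic container cannot satisfy host-level checks such as Swap, Mem, or SystemVerification. When debugging, the preflight phase can be re-run on its own; a sketch using the same config, with the skip list shortened here for readability:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm init phase preflight \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification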
I1201 19:17:59.334094 42501 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1201 19:17:59.342677 42501 out.go:252] - Generating certificates and keys ...
I1201 19:17:59.342772 42501 kubeadm.go:319] [certs] Using existing ca certificate authority
I1201 19:17:59.342852 42501 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1201 19:17:59.720671 42501 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1201 19:18:00.200889 42501 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1201 19:18:00.260186 42501 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1201 19:18:00.623836 42501 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1201 19:18:00.979513 42501 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1201 19:18:00.979855 42501 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-428744 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1201 19:18:01.069998 42501 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1201 19:18:01.070151 42501 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-428744 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1201 19:18:01.302743 42501 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1201 19:18:01.772557 42501 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1201 19:18:01.982514 42501 kubeadm.go:319] [certs] Generating "sa" key and public key
I1201 19:18:01.982802 42501 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1201 19:18:02.042458 42501 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1201 19:18:02.126604 42501 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1201 19:18:02.415467 42501 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1201 19:18:02.808797 42501 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1201 19:18:02.922424 42501 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1201 19:18:02.923017 42501 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1201 19:18:02.925909 42501 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1201 19:18:02.934650 42501 out.go:252] - Booting up control plane ...
I1201 19:18:02.934764 42501 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1201 19:18:02.934850 42501 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1201 19:18:02.934935 42501 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1201 19:18:02.953544 42501 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1201 19:18:02.953650 42501 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1201 19:18:02.962916 42501 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1201 19:18:02.964359 42501 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1201 19:18:02.964424 42501 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1201 19:18:03.124615 42501 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1201 19:18:03.124749 42501 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1201 19:22:03.124338 42501 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00013097s
I1201 19:22:03.124361 42501 kubeadm.go:319]
I1201 19:22:03.124414 42501 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1201 19:22:03.124455 42501 kubeadm.go:319] - The kubelet is not running
I1201 19:22:03.124600 42501 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1201 19:22:03.124617 42501 kubeadm.go:319]
I1201 19:22:03.124738 42501 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1201 19:22:03.124770 42501 kubeadm.go:319] - 'systemctl status kubelet'
I1201 19:22:03.124798 42501 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1201 19:22:03.124801 42501 kubeadm.go:319]
I1201 19:22:03.128350 42501 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1201 19:22:03.128765 42501 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1201 19:22:03.128873 42501 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
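An aside on the second warning above: it is the strongest lead for the healthz failure that follows, since this node's 5.15.0-1084-aws kernel is on cgroups v1, which kubelet v1.35 rejects by default. The opt-out the warning names belongs in the KubeletConfiguration document of the generated config. A sketch of a quick in-place edit (field name taken from the warning; a hypothetical shortcut, and a YAML-aware tool would be safer than sed):

    # Add the top-level opt-out right after the KubeletConfiguration header.
    sudo sed -i '/^kind: KubeletConfiguration$/a failCgroupV1: false' \
      /var/tmp/minikube/kubeadm.yaml

Note that the edit has to land in this source config, not in the rendered /var/lib/kubelet/config.yaml, which kubeadm rewrites on every init.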
I1201 19:22:03.129135 42501 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1201 19:22:03.129138 42501 kubeadm.go:319]
I1201 19:22:03.129206 42501 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1201 19:22:03.129323 42501 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-428744 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-428744 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.00013097s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
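An aside on the troubleshooting hints quoted above: with the docker driver, 'systemctl status kubelet' and 'journalctl -xeu kubelet' must run inside the kic container, not on the Jenkins host. A sketch, assuming the node container carries the profile name as usual:

    # Through minikube's ssh helper...
    out/minikube-linux-arm64 ssh -p functional-428744 \
      "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
    # ...or directly via docker, since the node is a systemd container.
    docker exec functional-428744 systemctl status kubelet --no-pager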
I1201 19:22:03.129413 42501 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1201 19:22:03.544717 42501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1201 19:22:03.558973 42501 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1201 19:22:03.559039 42501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1201 19:22:03.569188 42501 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1201 19:22:03.569199 42501 kubeadm.go:158] found existing configuration files:
I1201 19:22:03.569254 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1201 19:22:03.577468 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1201 19:22:03.577612 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1201 19:22:03.585426 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1201 19:22:03.594178 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1201 19:22:03.594236 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1201 19:22:03.602310 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1201 19:22:03.610648 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1201 19:22:03.610704 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1201 19:22:03.618590 42501 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1201 19:22:03.626439 42501 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1201 19:22:03.626499 42501 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1201 19:22:03.634445 42501 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1201 19:22:03.750218 42501 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1201 19:22:03.750657 42501 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1201 19:22:03.817000 42501 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1201 19:26:04.793012 42501 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I1201 19:26:04.793034 42501 kubeadm.go:319]
I1201 19:26:04.793150 42501 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1201 19:26:04.797914 42501 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1201 19:26:04.797973 42501 kubeadm.go:319] [preflight] Running pre-flight checks
I1201 19:26:04.798097 42501 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1201 19:26:04.798165 42501 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1201 19:26:04.798203 42501 kubeadm.go:319] OS: Linux
I1201 19:26:04.798255 42501 kubeadm.go:319] CGROUPS_CPU: enabled
I1201 19:26:04.798311 42501 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1201 19:26:04.798369 42501 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1201 19:26:04.798413 42501 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1201 19:26:04.798470 42501 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1201 19:26:04.798519 42501 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1201 19:26:04.798563 42501 kubeadm.go:319] CGROUPS_PIDS: enabled
I1201 19:26:04.798616 42501 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1201 19:26:04.798665 42501 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1201 19:26:04.798756 42501 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1201 19:26:04.798885 42501 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1201 19:26:04.798969 42501 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1201 19:26:04.799030 42501 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1201 19:26:04.802109 42501 out.go:252] - Generating certificates and keys ...
I1201 19:26:04.802200 42501 kubeadm.go:319] [certs] Using existing ca certificate authority
I1201 19:26:04.802287 42501 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1201 19:26:04.802373 42501 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1201 19:26:04.802439 42501 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1201 19:26:04.802505 42501 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1201 19:26:04.802564 42501 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1201 19:26:04.802626 42501 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1201 19:26:04.802702 42501 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1201 19:26:04.802777 42501 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1201 19:26:04.802857 42501 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1201 19:26:04.802895 42501 kubeadm.go:319] [certs] Using the existing "sa" key
I1201 19:26:04.802949 42501 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1201 19:26:04.803000 42501 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1201 19:26:04.803055 42501 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1201 19:26:04.803137 42501 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1201 19:26:04.803207 42501 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1201 19:26:04.803263 42501 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1201 19:26:04.803342 42501 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1201 19:26:04.803407 42501 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1201 19:26:04.808203 42501 out.go:252] - Booting up control plane ...
I1201 19:26:04.808294 42501 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1201 19:26:04.808371 42501 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1201 19:26:04.808433 42501 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1201 19:26:04.808530 42501 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1201 19:26:04.808619 42501 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1201 19:26:04.808716 42501 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1201 19:26:04.808794 42501 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1201 19:26:04.808830 42501 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1201 19:26:04.808951 42501 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1201 19:26:04.809049 42501 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1201 19:26:04.809108 42501 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000176325s
I1201 19:26:04.809111 42501 kubeadm.go:319]
I1201 19:26:04.809163 42501 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1201 19:26:04.809192 42501 kubeadm.go:319] - The kubelet is not running
I1201 19:26:04.809289 42501 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1201 19:26:04.809292 42501 kubeadm.go:319]
I1201 19:26:04.809389 42501 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1201 19:26:04.809418 42501 kubeadm.go:319] - 'systemctl status kubelet'
I1201 19:26:04.809446 42501 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1201 19:26:04.809561 42501 kubeadm.go:403] duration metric: took 8m5.789323556s to StartCluster
I1201 19:26:04.809592 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1201 19:26:04.809666 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1201 19:26:04.809690 42501 kubeadm.go:319]
I1201 19:26:04.834685 42501 cri.go:89] found id: ""
I1201 19:26:04.834700 42501 logs.go:282] 0 containers: []
W1201 19:26:04.834708 42501 logs.go:284] No container was found matching "kube-apiserver"
I1201 19:26:04.834714 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1201 19:26:04.834773 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1201 19:26:04.858223 42501 cri.go:89] found id: ""
I1201 19:26:04.858237 42501 logs.go:282] 0 containers: []
W1201 19:26:04.858244 42501 logs.go:284] No container was found matching "etcd"
I1201 19:26:04.858249 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1201 19:26:04.858308 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1201 19:26:04.883735 42501 cri.go:89] found id: ""
I1201 19:26:04.883749 42501 logs.go:282] 0 containers: []
W1201 19:26:04.883756 42501 logs.go:284] No container was found matching "coredns"
I1201 19:26:04.883762 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1201 19:26:04.883836 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1201 19:26:04.909868 42501 cri.go:89] found id: ""
I1201 19:26:04.909882 42501 logs.go:282] 0 containers: []
W1201 19:26:04.909889 42501 logs.go:284] No container was found matching "kube-scheduler"
I1201 19:26:04.909894 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1201 19:26:04.909951 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1201 19:26:04.934786 42501 cri.go:89] found id: ""
I1201 19:26:04.934799 42501 logs.go:282] 0 containers: []
W1201 19:26:04.934807 42501 logs.go:284] No container was found matching "kube-proxy"
I1201 19:26:04.934812 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1201 19:26:04.934869 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1201 19:26:04.959097 42501 cri.go:89] found id: ""
I1201 19:26:04.959111 42501 logs.go:282] 0 containers: []
W1201 19:26:04.959117 42501 logs.go:284] No container was found matching "kube-controller-manager"
I1201 19:26:04.959123 42501 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1201 19:26:04.959179 42501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1201 19:26:04.986333 42501 cri.go:89] found id: ""
I1201 19:26:04.986347 42501 logs.go:282] 0 containers: []
W1201 19:26:04.986364 42501 logs.go:284] No container was found matching "kindnet"
I1201 19:26:04.986374 42501 logs.go:123] Gathering logs for kubelet ...
I1201 19:26:04.986384 42501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1201 19:26:05.041981 42501 logs.go:123] Gathering logs for dmesg ...
I1201 19:26:05.041999 42501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1201 19:26:05.059956 42501 logs.go:123] Gathering logs for describe nodes ...
I1201 19:26:05.059974 42501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1201 19:26:05.151818 42501 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1201 19:26:05.139324 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.140031 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.142218 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.143039 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.144888 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1201 19:26:05.139324 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.140031 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.142218 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.143039 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:05.144888 5379 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1201 19:26:05.151829 42501 logs.go:123] Gathering logs for containerd ...
I1201 19:26:05.151839 42501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1201 19:26:05.194397 42501 logs.go:123] Gathering logs for container status ...
I1201 19:26:05.194416 42501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W1201 19:26:05.223142 42501 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000176325s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
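[editor's note] The two SystemVerification warnings in the stderr above name the real failure mode: on a cgroup v1 host, kubelet v1.35 refuses to start unless the configuration option the warning calls FailCgroupV1 is explicitly set to false (the verification itself must also be skipped, which the --ignore-preflight-errors list in the command above already does). Below is a minimal sketch of that opt-out via the kubeadm "patches" mechanism this run already exercises ("[patches] Applied patch ... to target \"kubeletconfiguration\""); the patch directory path is hypothetical, and the lowerCamelCase field name failCgroupV1 is assumed from the warning's FailCgroupV1:

    # Sketch only: the patch directory path is made up, and the
    # KubeletConfiguration field name failCgroupV1 is assumed from
    # the warning's 'FailCgroupV1'.
    mkdir -p /tmp/kubeadm-patches
    cat > /tmp/kubeadm-patches/kubeletconfiguration+strategic.yaml <<'EOF'
    failCgroupV1: false
    EOF
    # Then point kubeadm at it, e.g. patches.directory: /tmp/kubeadm-patches
    # in the InitConfiguration passed to 'kubeadm init --config'.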
W1201 19:26:05.223185 42501 out.go:285] *
W1201 19:26:05.223291 42501 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (byte-identical to the kubeadm init output, warnings, and wait-control-plane error printed above; duplicate block elided)
W1201 19:26:05.223351 42501 out.go:285] *
W1201 19:26:05.225623 42501 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1201 19:26:05.231663 42501 out.go:203]
W1201 19:26:05.234600 42501 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout/stderr: (byte-identical to the kubeadm init output, warnings, and wait-control-plane error printed above; duplicate block elided)
W1201 19:26:05.234641 42501 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1201 19:26:05.234661 42501 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1201 19:26:05.237767 42501 out.go:203]
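[editor's note] The suggested --extra-config flag changes only the cgroup driver; per the kubelet log at the end of this dump, the v1.35 kubelet is rejecting cgroup v1 outright, so the retry below is a sketch of minikube's own suggestion rather than a guaranteed fix (profile name taken from this log; the remaining original start flags are elided):

    # Sketch of the retry suggested above.
    out/minikube-linux-arm64 delete -p functional-428744
    out/minikube-linux-arm64 start -p functional-428744 \
        --extra-config=kubelet.cgroup-driver=systemd   # plus the original start flags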
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 01 19:17:51 functional-428744 containerd[765]: time="2025-12-01T19:17:51.083498806Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:52 functional-428744 containerd[765]: time="2025-12-01T19:17:52.036438400Z" level=info msg="No images store for sha256:5ed8f231f07481c657ad0e1d039921948e7abbc30ef6215465129012c4c4a508"
Dec 01 19:17:52 functional-428744 containerd[765]: time="2025-12-01T19:17:52.038861258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\""
Dec 01 19:17:52 functional-428744 containerd[765]: time="2025-12-01T19:17:52.051421520Z" level=info msg="ImageCreate event name:\"sha256:68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:52 functional-428744 containerd[765]: time="2025-12-01T19:17:52.052327287Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:53 functional-428744 containerd[765]: time="2025-12-01T19:17:53.398705326Z" level=info msg="No images store for sha256:89a52ae86f116708cd5ba0d54dfbf2ae3011f126ee9161c4afb19bf2a51ef285"
Dec 01 19:17:53 functional-428744 containerd[765]: time="2025-12-01T19:17:53.400844164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\""
Dec 01 19:17:53 functional-428744 containerd[765]: time="2025-12-01T19:17:53.413858019Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:53 functional-428744 containerd[765]: time="2025-12-01T19:17:53.414719558Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:54 functional-428744 containerd[765]: time="2025-12-01T19:17:54.349639760Z" level=info msg="No images store for sha256:5e4a4fe83792bf529a4e283e09069cf50cc9882d04168a33903ed6809a492e61"
Dec 01 19:17:54 functional-428744 containerd[765]: time="2025-12-01T19:17:54.351965459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\""
Dec 01 19:17:54 functional-428744 containerd[765]: time="2025-12-01T19:17:54.360122306Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:54 functional-428744 containerd[765]: time="2025-12-01T19:17:54.360891084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:55 functional-428744 containerd[765]: time="2025-12-01T19:17:55.324260194Z" level=info msg="No images store for sha256:eb9020767c0d3bbd754f3f52cbe4c8bdd935dd5862604d6dc0b1f10422189544"
Dec 01 19:17:55 functional-428744 containerd[765]: time="2025-12-01T19:17:55.327152892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\""
Dec 01 19:17:55 functional-428744 containerd[765]: time="2025-12-01T19:17:55.337284683Z" level=info msg="ImageCreate event name:\"sha256:404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:55 functional-428744 containerd[765]: time="2025-12-01T19:17:55.337700335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.355234506Z" level=info msg="No images store for sha256:64f3fb0a3392f487dbd4300c920f76dc3de2961e11fd6bfbedc75c0d25b1954c"
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.357772527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\""
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.367446729Z" level=info msg="ImageCreate event name:\"sha256:ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.368088143Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.35.0-beta.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.693514702Z" level=info msg="No images store for sha256:7475c7d18769df89a804d5bebf679dbf94886f3626f07a2be923beaa0cc7e5b0"
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.695828232Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\""
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.704080106Z" level=info msg="ImageCreate event name:\"sha256:66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 01 19:17:56 functional-428744 containerd[765]: time="2025-12-01T19:17:56.704407443Z" level=info msg="ImageUpdate event name:\"gcr.io/k8s-minikube/storage-provisioner:v5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1201 19:26:06.193914 5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:06.194461 5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:06.196143 5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:06.196684 5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1201 19:26:06.198234 5504 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec 1 18:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.015295] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.547776] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.034333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.774491] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.932193] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
19:26:06 up 1:08, 0 user, load average: 0.17, 0.50, 0.78
Linux functional-428744 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
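[editor's note] The uname line above shows a 5.15.0-1084-aws kernel from an Ubuntu 20.04 build, and 20.04 typically still mounts the legacy cgroup v1 layout that the kubelet is rejecting. A quick sketch to confirm which hierarchy a host mounts:

    # Filesystem type of /sys/fs/cgroup:
    #   cgroup2fs -> unified cgroup v2; tmpfs -> legacy cgroup v1 layout
    stat -fc %T /sys/fs/cgroup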
==> kubelet <==
Dec 01 19:26:02 functional-428744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 01 19:26:03 functional-428744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 01 19:26:03 functional-428744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:03 functional-428744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:03 functional-428744 kubelet[5305]: E1201 19:26:03.634308 5305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 01 19:26:03 functional-428744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 19:26:03 functional-428744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 01 19:26:04 functional-428744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 01 19:26:04 functional-428744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:04 functional-428744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:04 functional-428744 kubelet[5311]: E1201 19:26:04.382263 5311 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 01 19:26:04 functional-428744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 19:26:04 functional-428744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 01 19:26:05 functional-428744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 01 19:26:05 functional-428744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:05 functional-428744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:05 functional-428744 kubelet[5383]: E1201 19:26:05.155844 5383 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 01 19:26:05 functional-428744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 19:26:05 functional-428744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 01 19:26:05 functional-428744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 01 19:26:05 functional-428744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:05 functional-428744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 01 19:26:05 functional-428744 kubelet[5425]: E1201 19:26:05.886513 5425 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 01 19:26:05 functional-428744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 01 19:26:05 functional-428744 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
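[editor's note] The kubelet section above shows a tight crash loop (restart counter 318 through 321) on the same configuration-validation error, which is why the healthz endpoint kubeadm was polling never answered. A sketch for inspecting the loop from the host, assuming the docker driver's node container carries the profile name and that curl is available in the node image:

    # Container name and curl availability are assumptions taken from this log.
    docker exec functional-428744 systemctl status kubelet --no-pager
    docker exec functional-428744 journalctl -u kubelet --no-pager -n 20
    docker exec functional-428744 curl -sS http://127.0.0.1:10248/healthz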
helpers_test.go:262: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-428744 -n functional-428744
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-428744 -n functional-428744: exit status 6 (353.883418ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1201 19:26:06.693731 48727 status.go:458] kubeconfig endpoint: get endpoint: "functional-428744" does not appear in /home/jenkins/minikube-integration/21997-2497/kubeconfig
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-428744" apiserver is not running, skipping kubectl commands (state="Stopped")
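[editor's note] The status output above also flags a stale kubectl context, and its stderr shows the profile missing from the kubeconfig entirely. The repair minikube itself suggests, as a sketch:

    # Re-point kubectl at this profile, per the WARNING in the status output.
    out/minikube-linux-arm64 update-context -p functional-428744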
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (505.96s)