=== RUN TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run: out/minikube-linux-arm64 start -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0
E1212 00:08:41.615176 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:09:09.326024 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.046521 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.052979 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.064381 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.085847 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.127321 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.208819 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.370420 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:57.692195 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:58.334403 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:10:59.615809 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:02.177914 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:07.299378 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:17.541536 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:11:38.022923 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:12:18.984436 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:13:40.908850 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-095481/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 00:13:41.615080 4290 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/addons-962736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.450729647s)
-- stdout --
* [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=22101
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "functional-767012" primary control-plane node in "functional-767012" cluster
* Pulling base image v0.0.48-1765275396-22083 ...
* Found network options:
- HTTP_PROXY=localhost:36001
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
* Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
-- /stdout --
** stderr **
! Local proxy ignored: not passing HTTP_PROXY=localhost:36001 to docker env.
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001247859s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000289392s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
*
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000289392s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
* Related issue: https://github.com/kubernetes/minikube/issues/4172
** /stderr **
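[editor note] The failure above is kubeadm timing out on the kubelet's local healthz endpoint. Below is a minimal Go sketch of the same probe, handy for checking the kubelet by hand from inside the node (e.g. via `minikube ssh`); the URL and the 4-minute deadline are taken from the kubeadm log above. This is an illustrative poller, not kubeadm's actual implementation.

// kubelet_healthz.go: poll the kubelet healthz endpoint until it reports ok
// or a deadline passes, mirroring the wait kubeadm performs above.
// Illustrative sketch only; run it where 127.0.0.1:10248 is the kubelet.
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	const url = "http://127.0.0.1:10248/healthz" // endpoint from the kubeadm log above
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("kubelet healthy: %s\n", body)
				return
			}
			fmt.Printf("kubelet not ready: %d %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("probe failed: %v\n", err)
		}
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "kubelet did not become healthy within 4m0s")
			os.Exit(1)
		case <-time.After(2 * time.Second):
		}
	}
}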
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0": exit status 109
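[editor note] The stderr warnings point at cgroups: kubeadm flags cgroup v1 as deprecated for kubelet v1.35+, and minikube suggests forcing the systemd cgroup driver. A quick hedged check of which cgroup version the host mounts, via the filesystem magic on /sys/fs/cgroup; Linux-only sketch, CGROUP2_SUPER_MAGIC (0x63677270) is from linux/magic.h.

// cgroupver.go: report whether /sys/fs/cgroup is a cgroup v2 (unified)
// mount, the condition behind kubeadm's cgroups-v1 deprecation warning above.
package main

import (
	"fmt"
	"syscall"
)

const cgroup2SuperMagic = 0x63677270 // CGROUP2_SUPER_MAGIC from linux/magic.h

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/cgroup", &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	if st.Type == cgroup2SuperMagic {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// Per the warning above, kubelet v1.35+ needs FailCgroupV1=false to run on v1.
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}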
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run: docker inspect functional-767012
helpers_test.go:244: (dbg) docker inspect functional-767012:
-- stdout --
[
{
"Id": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
"Created": "2025-12-12T00:06:52.261765556Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 42951,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-12-12T00:06:52.317917194Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:334f1182332719d3672d91a12e83f7529929c12b116ee304aabb54ea4d8debdf",
"ResolvConfPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hostname",
"HostsPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/hosts",
"LogPath": "/var/lib/docker/containers/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e/6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e-json.log",
"Name": "/functional-767012",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"functional-767012:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "functional-767012",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 4294967296,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 8589934592,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "6585a82fe5e65800e54a53321cd6cdb00e472304bd15379f7d12b62971cdb86e",
"LowerDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70-init/diff:/var/lib/docker/overlay2/4c0e5370e4fd7b4e6c6a79620ef377d7d55826709cd277e0cfa49c6005af0314/diff",
"MergedDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/merged",
"UpperDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/diff",
"WorkDir": "/var/lib/docker/overlay2/6cbe17e6a502dfa970c93b0618a4538ffd1b4c8b0db59972b087f0317d6c6f70/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "functional-767012",
"Source": "/var/lib/docker/volumes/functional-767012/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "functional-767012",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8441/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "functional-767012",
"name.minikube.sigs.k8s.io": "functional-767012",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e781257da3adf1d3284ab2a6de0168c3db7957f25a7e53d0015250294193762d",
"SandboxKey": "/var/run/docker/netns/e781257da3ad",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32788"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32789"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32792"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32790"
}
],
"8441/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "32791"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"functional-767012": {
"IPAMConfig": {
"IPv4Address": "192.168.49.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "32:4d:78:ba:7d:83",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "83467cc4cb13818b98ec0d7cb5fc0064ea6eb2c8db4256a8a81330921aa2d9a4",
"EndpointID": "b787b732d8d748776ceeb6e65fab51cc1e79758446bc85ac20043b35593fab12",
"Gateway": "192.168.49.1",
"IPAddress": "192.168.49.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"functional-767012",
"6585a82fe5e6"
]
}
}
}
}
]
-- /stdout --
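[editor note] The post-mortem shells out to docker inspect; when scripting a similar check, the JSON above can be decoded directly. A minimal sketch, assuming a local docker CLI and the container name functional-767012 from this run, that extracts the container IP from NetworkSettings.Networks (192.168.49.2 above). The struct models only the fields used.

// inspectip.go: run `docker inspect <name>` and pull the container IP out of
// NetworkSettings.Networks, the same field shown in the post-mortem above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Networks map[string]struct {
			IPAddress string `json:"IPAddress"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	const name = "functional-767012" // container name from the log above
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		log.Fatalf("docker inspect %s: %v", name, err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatalf("decode inspect output: %v", err)
	}
	for _, e := range entries {
		for netName, n := range e.NetworkSettings.Networks {
			fmt.Printf("%s: %s\n", netName, n.IPAddress) // e.g. functional-767012: 192.168.49.2
		}
	}
}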
helpers_test.go:248: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-767012 -n functional-767012: exit status 6 (308.803102ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1212 00:15:06.929661 48050 status.go:458] kubeconfig endpoint: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
** /stderr **
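[editor note] status exits 6 because the kubeconfig no longer carries an entry for functional-767012. A hedged sketch of the same lookup using client-go's clientcmd (assumes k8s.io/client-go is available as a module dependency and KUBECONFIG is set, as in this run); it reproduces the condition behind the "does not appear in ... kubeconfig" error above.

// kubectx.go: load a kubeconfig and report whether a named context exists,
// mirroring the endpoint lookup that failed with exit status 6 above.
package main

import (
	"fmt"
	"log"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // e.g. .../22101-2343/kubeconfig in this run
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatalf("load %s: %v", path, err)
	}
	const want = "functional-767012"
	ctx, ok := cfg.Contexts[want]
	if !ok {
		log.Fatalf("%q does not appear in %s", want, path) // same condition as status.go:458
	}
	cluster := cfg.Clusters[ctx.Cluster]
	if cluster == nil {
		log.Fatalf("context %s references missing cluster %q", want, ctx.Cluster)
	}
	fmt.Printf("context %s -> cluster %s (%s)\n", want, ctx.Cluster, cluster.Server)
}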
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-arm64 -p functional-767012 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs:
-- stdout --
==> Audit <==
┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ ssh │ functional-095481 ssh sudo cat /etc/ssl/certs/51391683.0 │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ ssh │ functional-095481 ssh sudo cat /etc/ssl/certs/42902.pem │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ ssh │ functional-095481 ssh sudo cat /usr/share/ca-certificates/42902.pem │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ ssh │ functional-095481 ssh sudo cat /etc/ssl/certs/3ec20f2e.0 │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image load --daemon kicbase/echo-server:functional-095481 --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image save kicbase/echo-server:functional-095481 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ update-context │ functional-095481 update-context --alsologtostderr -v=2 │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image rm kicbase/echo-server:functional-095481 --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ update-context │ functional-095481 update-context --alsologtostderr -v=2 │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ update-context │ functional-095481 update-context --alsologtostderr -v=2 │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image save --daemon kicbase/echo-server:functional-095481 --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls --format yaml --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls --format short --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls --format table --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls --format json --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ ssh │ functional-095481 ssh pgrep buildkitd │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ │
│ image │ functional-095481 image build -t localhost/my-image:functional-095481 testdata/build --alsologtostderr │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ image │ functional-095481 image ls │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ delete │ -p functional-095481 │ functional-095481 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ 12 Dec 25 00:06 UTC │
│ start │ -p functional-767012 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd --kubernetes-version=v1.35.0-beta.0 │ functional-767012 │ jenkins │ v1.37.0 │ 12 Dec 25 00:06 UTC │ │
└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/12 00:06:47
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.25.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1212 00:06:47.197153 42564 out.go:360] Setting OutFile to fd 1 ...
I1212 00:06:47.197256 42564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:47.197260 42564 out.go:374] Setting ErrFile to fd 2...
I1212 00:06:47.197263 42564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 00:06:47.197525 42564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22101-2343/.minikube/bin
I1212 00:06:47.197938 42564 out.go:368] Setting JSON to false
I1212 00:06:47.198741 42564 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2954,"bootTime":1765495054,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I1212 00:06:47.198800 42564 start.go:143] virtualization:
I1212 00:06:47.203474 42564 out.go:179] * [functional-767012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
I1212 00:06:47.207157 42564 out.go:179] - MINIKUBE_LOCATION=22101
I1212 00:06:47.207225 42564 notify.go:221] Checking for updates...
I1212 00:06:47.214266 42564 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1212 00:06:47.219645 42564 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22101-2343/kubeconfig
I1212 00:06:47.222613 42564 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22101-2343/.minikube
I1212 00:06:47.225562 42564 out.go:179] - MINIKUBE_BIN=out/minikube-linux-arm64
I1212 00:06:47.228426 42564 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1212 00:06:47.231533 42564 driver.go:422] Setting default libvirt URI to qemu:///system
I1212 00:06:47.257845 42564 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
I1212 00:06:47.257959 42564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:06:47.326960 42564 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 00:06:47.315772994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:06:47.327104 42564 docker.go:319] overlay module found
I1212 00:06:47.330432 42564 out.go:179] * Using the docker driver based on user configuration
I1212 00:06:47.333260 42564 start.go:309] selected driver: docker
I1212 00:06:47.333269 42564 start.go:927] validating driver "docker" against <nil>
I1212 00:06:47.333281 42564 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1212 00:06:47.333990 42564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 00:06:47.395640 42564 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 00:06:47.386140918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 00:06:47.395781 42564 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1212 00:06:47.395996 42564 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1212 00:06:47.398838 42564 out.go:179] * Using Docker driver with root privileges
I1212 00:06:47.401888 42564 cni.go:84] Creating CNI manager for ""
I1212 00:06:47.401972 42564 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1212 00:06:47.401982 42564 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
I1212 00:06:47.402086 42564 start.go:353] cluster config:
{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:06:47.405203 42564 out.go:179] * Starting "functional-767012" primary control-plane node in "functional-767012" cluster
I1212 00:06:47.408001 42564 cache.go:134] Beginning downloading kic base image for docker with containerd
I1212 00:06:47.411027 42564 out.go:179] * Pulling base image v0.0.48-1765275396-22083 ...
I1212 00:06:47.413977 42564 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 00:06:47.414019 42564 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4
I1212 00:06:47.414028 42564 cache.go:65] Caching tarball of preloaded images
I1212 00:06:47.414028 42564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon
I1212 00:06:47.414129 42564 preload.go:238] Found /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1212 00:06:47.414138 42564 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on containerd
I1212 00:06:47.414519 42564 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
I1212 00:06:47.414537 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json: {Name:mk5167fad948f74f480c4d53e31ac2b2252b3057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:47.434448 42564 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f in local docker daemon, skipping pull
I1212 00:06:47.434459 42564 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f exists in daemon, skipping load
I1212 00:06:47.434478 42564 cache.go:243] Successfully downloaded all kic artifacts
I1212 00:06:47.434508 42564 start.go:360] acquireMachinesLock for functional-767012: {Name:mk41cf89e93a3830367886ebbef2bb8f6e99e3f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:06:47.434619 42564 start.go:364] duration metric: took 97.502µs to acquireMachinesLock for "functional-767012"
I1212 00:06:47.434645 42564 start.go:93] Provisioning new machine with config: &{Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1212 00:06:47.434754 42564 start.go:125] createHost starting for "" (driver="docker")
I1212 00:06:47.439881 42564 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
W1212 00:06:47.440152 42564 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:36001 to docker env.
I1212 00:06:47.440177 42564 start.go:159] libmachine.API.Create for "functional-767012" (driver="docker")
I1212 00:06:47.440203 42564 client.go:173] LocalClient.Create starting
I1212 00:06:47.440269 42564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem
I1212 00:06:47.440302 42564 main.go:143] libmachine: Decoding PEM data...
I1212 00:06:47.440321 42564 main.go:143] libmachine: Parsing certificate...
I1212 00:06:47.440370 42564 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem
I1212 00:06:47.440385 42564 main.go:143] libmachine: Decoding PEM data...
I1212 00:06:47.440396 42564 main.go:143] libmachine: Parsing certificate...
I1212 00:06:47.440748 42564 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 00:06:47.456064 42564 cli_runner.go:211] docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 00:06:47.456156 42564 network_create.go:284] running [docker network inspect functional-767012] to gather additional debugging logs...
I1212 00:06:47.456172 42564 cli_runner.go:164] Run: docker network inspect functional-767012
W1212 00:06:47.471138 42564 cli_runner.go:211] docker network inspect functional-767012 returned with exit code 1
I1212 00:06:47.471155 42564 network_create.go:287] error running [docker network inspect functional-767012]: docker network inspect functional-767012: exit status 1
stdout:
[]
stderr:
Error response from daemon: network functional-767012 not found
I1212 00:06:47.471167 42564 network_create.go:289] output of [docker network inspect functional-767012]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network functional-767012 not found
** /stderr **
I1212 00:06:47.471269 42564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:06:47.487406 42564 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018befc0}
I1212 00:06:47.487438 42564 network_create.go:124] attempt to create docker network functional-767012 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1212 00:06:47.487491 42564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-767012 functional-767012
I1212 00:06:47.544730 42564 network_create.go:108] docker network functional-767012 192.168.49.0/24 created
I1212 00:06:47.544751 42564 kic.go:121] calculated static IP "192.168.49.2" for the "functional-767012" container
I1212 00:06:47.544833 42564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1212 00:06:47.559700 42564 cli_runner.go:164] Run: docker volume create functional-767012 --label name.minikube.sigs.k8s.io=functional-767012 --label created_by.minikube.sigs.k8s.io=true
I1212 00:06:47.576753 42564 oci.go:103] Successfully created a docker volume functional-767012
I1212 00:06:47.576841 42564 cli_runner.go:164] Run: docker run --rm --name functional-767012-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-767012 --entrypoint /usr/bin/test -v functional-767012:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -d /var/lib
I1212 00:06:48.111083 42564 oci.go:107] Successfully prepared a docker volume functional-767012
I1212 00:06:48.111148 42564 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 00:06:48.111156 42564 kic.go:194] Starting extracting preloaded images to volume ...
I1212 00:06:48.111231 42564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-767012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir
I1212 00:06:52.189648 42564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22101-2343/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v functional-767012:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f -I lz4 -xf /preloaded.tar -C /extractDir: (4.078385728s)
I1212 00:06:52.189669 42564 kic.go:203] duration metric: took 4.078509634s to extract preloaded images to volume ...
W1212 00:06:52.189848 42564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1212 00:06:52.189992 42564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1212 00:06:52.247289 42564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-767012 --name functional-767012 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-767012 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-767012 --network functional-767012 --ip 192.168.49.2 --volume functional-767012:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f
I1212 00:06:52.543181 42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Running}}
I1212 00:06:52.566334 42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:06:52.586972 42564 cli_runner.go:164] Run: docker exec functional-767012 stat /var/lib/dpkg/alternatives/iptables
I1212 00:06:52.639202 42564 oci.go:144] the created container "functional-767012" has a running status.
I1212 00:06:52.639220 42564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa...
I1212 00:06:52.963511 42564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1212 00:06:52.991172 42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:06:53.025999 42564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1212 00:06:53.026023 42564 kic_runner.go:114] Args: [docker exec --privileged functional-767012 chown docker:docker /home/docker/.ssh/authorized_keys]
I1212 00:06:53.103766 42564 cli_runner.go:164] Run: docker container inspect functional-767012 --format={{.State.Status}}
I1212 00:06:53.123815 42564 machine.go:94] provisionDockerMachine start ...
I1212 00:06:53.123889 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:53.141545 42564 main.go:143] libmachine: Using SSH client type: native
I1212 00:06:53.141870 42564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1212 00:06:53.141877 42564 main.go:143] libmachine: About to run SSH command:
hostname
I1212 00:06:53.142490 42564 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36058->127.0.0.1:32788: read: connection reset by peer
I1212 00:06:56.290429 42564 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
I1212 00:06:56.290443 42564 ubuntu.go:182] provisioning hostname "functional-767012"
I1212 00:06:56.290506 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:56.307103 42564 main.go:143] libmachine: Using SSH client type: native
I1212 00:06:56.307406 42564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1212 00:06:56.307417 42564 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-767012 && echo "functional-767012" | sudo tee /etc/hostname
I1212 00:06:56.464023 42564 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-767012
I1212 00:06:56.464090 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:56.481507 42564 main.go:143] libmachine: Using SSH client type: native
I1212 00:06:56.481803 42564 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil> [] 0s} 127.0.0.1 32788 <nil> <nil>}
I1212 00:06:56.481816 42564 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-767012' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-767012/g' /etc/hosts;
  else
    echo '127.0.1.1 functional-767012' | sudo tee -a /etc/hosts;
  fi
fi
I1212 00:06:56.631075 42564 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1212 00:06:56.631100 42564 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22101-2343/.minikube CaCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22101-2343/.minikube}
I1212 00:06:56.631126 42564 ubuntu.go:190] setting up certificates
I1212 00:06:56.631133 42564 provision.go:84] configureAuth start
I1212 00:06:56.631191 42564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
I1212 00:06:56.647644 42564 provision.go:143] copyHostCerts
I1212 00:06:56.647701 42564 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem, removing ...
I1212 00:06:56.647715 42564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem
I1212 00:06:56.647794 42564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/ca.pem (1082 bytes)
I1212 00:06:56.647879 42564 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem, removing ...
I1212 00:06:56.647883 42564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem
I1212 00:06:56.647907 42564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/cert.pem (1123 bytes)
I1212 00:06:56.647956 42564 exec_runner.go:144] found /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem, removing ...
I1212 00:06:56.647960 42564 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem
I1212 00:06:56.647983 42564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22101-2343/.minikube/key.pem (1675 bytes)
I1212 00:06:56.648026 42564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem org=jenkins.functional-767012 san=[127.0.0.1 192.168.49.2 functional-767012 localhost minikube]
I1212 00:06:56.826531 42564 provision.go:177] copyRemoteCerts
I1212 00:06:56.826589 42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1212 00:06:56.826628 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:56.843384 42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:06:56.946302 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1212 00:06:56.962548 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1212 00:06:56.978846 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1212 00:06:56.994866 42564 provision.go:87] duration metric: took 363.718486ms to configureAuth
I1212 00:06:56.994888 42564 ubuntu.go:206] setting minikube options for container-runtime
I1212 00:06:56.995159 42564 config.go:182] Loaded profile config "functional-767012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-beta.0
I1212 00:06:56.995165 42564 machine.go:97] duration metric: took 3.871340713s to provisionDockerMachine
I1212 00:06:56.995171 42564 client.go:176] duration metric: took 9.554963583s to LocalClient.Create
I1212 00:06:56.995187 42564 start.go:167] duration metric: took 9.555010139s to libmachine.API.Create "functional-767012"
I1212 00:06:56.995193 42564 start.go:293] postStartSetup for "functional-767012" (driver="docker")
I1212 00:06:56.995203 42564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1212 00:06:56.995259 42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1212 00:06:56.995315 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:57.015904 42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:06:57.122793 42564 ssh_runner.go:195] Run: cat /etc/os-release
I1212 00:06:57.125967 42564 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1212 00:06:57.125984 42564 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
I1212 00:06:57.125995 42564 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/addons for local assets ...
I1212 00:06:57.126050 42564 filesync.go:126] Scanning /home/jenkins/minikube-integration/22101-2343/.minikube/files for local assets ...
I1212 00:06:57.126132 42564 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem -> 42902.pem in /etc/ssl/certs
I1212 00:06:57.126210 42564 filesync.go:149] local asset: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts -> hosts in /etc/test/nested/copy/4290
I1212 00:06:57.126257 42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4290
I1212 00:06:57.133806 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /etc/ssl/certs/42902.pem (1708 bytes)
I1212 00:06:57.151171 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/test/nested/copy/4290/hosts --> /etc/test/nested/copy/4290/hosts (40 bytes)
I1212 00:06:57.168939 42564 start.go:296] duration metric: took 173.73226ms for postStartSetup
I1212 00:06:57.169298 42564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
I1212 00:06:57.186152 42564 profile.go:143] Saving config to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/config.json ...
I1212 00:06:57.186427 42564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1212 00:06:57.186467 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:57.203405 42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:06:57.303891 42564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1212 00:06:57.308179 42564 start.go:128] duration metric: took 9.873412289s to createHost
I1212 00:06:57.308194 42564 start.go:83] releasing machines lock for "functional-767012", held for 9.873568466s
I1212 00:06:57.308273 42564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-767012
I1212 00:06:57.327166 42564 out.go:179] * Found network options:
I1212 00:06:57.330146 42564 out.go:179] - HTTP_PROXY=localhost:36001
W1212 00:06:57.333163 42564 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
I1212 00:06:57.336043 42564 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
I1212 00:06:57.338865 42564 ssh_runner.go:195] Run: cat /version.json
I1212 00:06:57.338889 42564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1212 00:06:57.338905 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:57.338970 42564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-767012
I1212 00:06:57.359517 42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:06:57.372630 42564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22101-2343/.minikube/machines/functional-767012/id_rsa Username:docker}
I1212 00:06:57.554474 42564 ssh_runner.go:195] Run: systemctl --version
I1212 00:06:57.560840 42564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1212 00:06:57.565029 42564 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1212 00:06:57.565098 42564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1212 00:06:57.591094 42564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
I1212 00:06:57.591106 42564 start.go:496] detecting cgroup driver to use...
I1212 00:06:57.591137 42564 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1212 00:06:57.591184 42564 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1212 00:06:57.606473 42564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1212 00:06:57.619631 42564 docker.go:218] disabling cri-docker service (if available) ...
I1212 00:06:57.619683 42564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1212 00:06:57.637563 42564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1212 00:06:57.656156 42564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1212 00:06:57.782296 42564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1212 00:06:57.917121 42564 docker.go:234] disabling docker service ...
I1212 00:06:57.917206 42564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1212 00:06:57.938400 42564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1212 00:06:57.951630 42564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1212 00:06:58.080881 42564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1212 00:06:58.209260 42564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1212 00:06:58.222334 42564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1212 00:06:58.236073 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1212 00:06:58.244429 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1212 00:06:58.252806 42564 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1212 00:06:58.252872 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1212 00:06:58.261363 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:06:58.269629 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1212 00:06:58.277786 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1212 00:06:58.285990 42564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1212 00:06:58.293335 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1212 00:06:58.301289 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1212 00:06:58.309301 42564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1212 00:06:58.318040 42564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1212 00:06:58.325342 42564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1212 00:06:58.332418 42564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:06:58.455476 42564 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1212 00:06:58.579818 42564 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
I1212 00:06:58.579878 42564 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1212 00:06:58.584006 42564 start.go:564] Will wait 60s for crictl version
I1212 00:06:58.584061 42564 ssh_runner.go:195] Run: which crictl
I1212 00:06:58.587557 42564 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
I1212 00:06:58.612139 42564 start.go:580] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v2.2.0
RuntimeApiVersion: v1
I1212 00:06:58.612207 42564 ssh_runner.go:195] Run: containerd --version
I1212 00:06:58.632080 42564 ssh_runner.go:195] Run: containerd --version
I1212 00:06:58.656605 42564 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on containerd 2.2.0 ...
I1212 00:06:58.659604 42564 cli_runner.go:164] Run: docker network inspect functional-767012 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 00:06:58.675390 42564 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1212 00:06:58.678842 42564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:06:58.688190 42564 kubeadm.go:884] updating cluster {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1212 00:06:58.688287 42564 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime containerd
I1212 00:06:58.688347 42564 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 00:06:58.713204 42564 containerd.go:627] all images are preloaded for containerd runtime.
I1212 00:06:58.713216 42564 containerd.go:534] Images already preloaded, skipping extraction
I1212 00:06:58.713271 42564 ssh_runner.go:195] Run: sudo crictl images --output json
I1212 00:06:58.737086 42564 containerd.go:627] all images are preloaded for containerd runtime.
I1212 00:06:58.737097 42564 cache_images.go:86] Images are preloaded, skipping loading
I1212 00:06:58.737102 42564 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 containerd true true} ...
I1212 00:06:58.737183 42564 kubeadm.go:947] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-767012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
[Install]
config:
{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1212 00:06:58.737242 42564 ssh_runner.go:195] Run: sudo crictl info
I1212 00:06:58.760814 42564 cni.go:84] Creating CNI manager for ""
I1212 00:06:58.760824 42564 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1212 00:06:58.760842 42564 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1212 00:06:58.760867 42564 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-767012 NodeName:functional-767012 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1212 00:06:58.760981 42564 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "functional-767012"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.49.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.35.0-beta.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1212 00:06:58.761045 42564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
I1212 00:06:58.768439 42564 binaries.go:51] Found k8s binaries, skipping transfer
I1212 00:06:58.768494 42564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1212 00:06:58.775651 42564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
I1212 00:06:58.787812 42564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
I1212 00:06:58.800601 42564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2237 bytes)
I1212 00:06:58.813044 42564 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1212 00:06:58.816406 42564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1212 00:06:58.826172 42564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1212 00:06:58.941368 42564 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1212 00:06:58.958454 42564 certs.go:69] Setting up /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012 for IP: 192.168.49.2
I1212 00:06:58.958464 42564 certs.go:195] generating shared ca certs ...
I1212 00:06:58.958478 42564 certs.go:227] acquiring lock for ca certs: {Name:mk18ed2fce74cbc4ee01c0f71e2dbdd98ccce1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:58.958645 42564 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key
I1212 00:06:58.958697 42564 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key
I1212 00:06:58.958703 42564 certs.go:257] generating profile certs ...
I1212 00:06:58.958767 42564 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key
I1212 00:06:58.958777 42564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt with IP's: []
I1212 00:06:59.414847 42564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt ...
I1212 00:06:59.414877 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.crt: {Name:mk2e53d59ca31de5ec122adc19e355e9d6363f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:59.415090 42564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key ...
I1212 00:06:59.415097 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/client.key: {Name:mkc5410a305906ba4b2f4736459e0bd9517fa04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:59.415186 42564 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4
I1212 00:06:59.415197 42564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I1212 00:06:59.614173 42564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4 ...
I1212 00:06:59.614187 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4: {Name:mkcdecde159a1729b44ee0ef69d47828ae2fafaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:59.614359 42564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4 ...
I1212 00:06:59.614366 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4: {Name:mk527300555574f037458024e4c71b6423b90770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:59.614446 42564 certs.go:382] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt.fcbff5a4 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt
I1212 00:06:59.614523 42564 certs.go:386] copying /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key.fcbff5a4 -> /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key
I1212 00:06:59.614573 42564 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key
I1212 00:06:59.614588 42564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt with IP's: []
I1212 00:06:59.869503 42564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt ...
I1212 00:06:59.869517 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt: {Name:mk283ec0ae854b1bd17de590b872acb2c9ee389c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:59.869703 42564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key ...
I1212 00:06:59.869710 42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key: {Name:mk29d442703b1a29103830f9d8cac58a7d3cd2db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 00:06:59.869899 42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem (1338 bytes)
W1212 00:06:59.869948 42564 certs.go:480] ignoring /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290_empty.pem, impossibly tiny 0 bytes
I1212 00:06:59.869958 42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca-key.pem (1679 bytes)
I1212 00:06:59.869987 42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/ca.pem (1082 bytes)
I1212 00:06:59.870013 42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/cert.pem (1123 bytes)
I1212 00:06:59.870035 42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/certs/key.pem (1675 bytes)
I1212 00:06:59.870078 42564 certs.go:484] found cert: /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem (1708 bytes)
I1212 00:06:59.870629 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1212 00:06:59.888316 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1212 00:06:59.906312 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1212 00:06:59.924325 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1212 00:06:59.941487 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1212 00:06:59.958418 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1212 00:06:59.975811 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1212 00:06:59.992136 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/profiles/functional-767012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1212 00:07:00.045062 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/certs/4290.pem --> /usr/share/ca-certificates/4290.pem (1338 bytes)
I1212 00:07:00.101976 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/files/etc/ssl/certs/42902.pem --> /usr/share/ca-certificates/42902.pem (1708 bytes)
I1212 00:07:00.178024 42564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22101-2343/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1212 00:07:00.226934 42564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1212 00:07:00.256909 42564 ssh_runner.go:195] Run: openssl version
I1212 00:07:00.275933 42564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4290.pem
I1212 00:07:00.290818 42564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4290.pem /etc/ssl/certs/4290.pem
I1212 00:07:00.302618 42564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4290.pem
I1212 00:07:00.316383 42564 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 00:06 /usr/share/ca-certificates/4290.pem
I1212 00:07:00.316454 42564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4290.pem
I1212 00:07:00.375820 42564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1212 00:07:00.384711 42564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4290.pem /etc/ssl/certs/51391683.0
I1212 00:07:00.394134 42564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42902.pem
I1212 00:07:00.403394 42564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42902.pem /etc/ssl/certs/42902.pem
I1212 00:07:00.412367 42564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42902.pem
I1212 00:07:00.417362 42564 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 00:06 /usr/share/ca-certificates/42902.pem
I1212 00:07:00.417441 42564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42902.pem
I1212 00:07:00.465285 42564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1212 00:07:00.473542 42564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42902.pem /etc/ssl/certs/3ec20f2e.0
I1212 00:07:00.481276 42564 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1212 00:07:00.489069 42564 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1212 00:07:00.496931 42564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1212 00:07:00.500927 42564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:56 /usr/share/ca-certificates/minikubeCA.pem
I1212 00:07:00.500992 42564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1212 00:07:00.542408 42564 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1212 00:07:00.550763 42564 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
I1212 00:07:00.559527 42564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1212 00:07:00.564282 42564 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1212 00:07:00.564324 42564 kubeadm.go:401] StartCluster: {Name:functional-767012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765275396-22083@sha256:ffa93f7bad1d2c0a7acfa6e97f1eec0e4955680d08c3904e49db297a10f7f89f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-767012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1212 00:07:00.564397 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1212 00:07:00.564465 42564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1212 00:07:00.601747 42564 cri.go:89] found id: ""
I1212 00:07:00.601804 42564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1212 00:07:00.609766 42564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1212 00:07:00.617686 42564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1212 00:07:00.617745 42564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 00:07:00.626117 42564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 00:07:00.626126 42564 kubeadm.go:158] found existing configuration files:
I1212 00:07:00.626189 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1212 00:07:00.634065 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1212 00:07:00.634122 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1212 00:07:00.641914 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1212 00:07:00.649757 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1212 00:07:00.649818 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1212 00:07:00.657377 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1212 00:07:00.665403 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1212 00:07:00.665461 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1212 00:07:00.673023 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1212 00:07:00.680896 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1212 00:07:00.680953 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1212 00:07:00.688727 42564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1212 00:07:00.726537 42564 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1212 00:07:00.726756 42564 kubeadm.go:319] [preflight] Running pre-flight checks
I1212 00:07:00.803761 42564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1212 00:07:00.803824 42564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1212 00:07:00.803859 42564 kubeadm.go:319] OS: Linux
I1212 00:07:00.803902 42564 kubeadm.go:319] CGROUPS_CPU: enabled
I1212 00:07:00.803949 42564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1212 00:07:00.803995 42564 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1212 00:07:00.804042 42564 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1212 00:07:00.804089 42564 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1212 00:07:00.804136 42564 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1212 00:07:00.804180 42564 kubeadm.go:319] CGROUPS_PIDS: enabled
I1212 00:07:00.804235 42564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1212 00:07:00.804286 42564 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1212 00:07:00.881105 42564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1212 00:07:00.881209 42564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1212 00:07:00.881299 42564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1212 00:07:00.886747 42564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 00:07:00.893249 42564 out.go:252] - Generating certificates and keys ...
I1212 00:07:00.893345 42564 kubeadm.go:319] [certs] Using existing ca certificate authority
I1212 00:07:00.893418 42564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1212 00:07:00.973301 42564 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1212 00:07:01.232024 42564 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1212 00:07:01.342941 42564 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1212 00:07:01.419328 42564 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1212 00:07:01.773039 42564 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1212 00:07:01.773298 42564 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1212 00:07:01.973487 42564 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1212 00:07:01.973758 42564 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1212 00:07:02.216434 42564 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1212 00:07:02.436493 42564 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1212 00:07:02.996087 42564 kubeadm.go:319] [certs] Generating "sa" key and public key
I1212 00:07:02.996368 42564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 00:07:03.197956 42564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 00:07:03.458438 42564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1212 00:07:03.661265 42564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 00:07:03.790618 42564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 00:07:04.202621 42564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 00:07:04.203482 42564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1212 00:07:04.207624 42564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 00:07:04.211147 42564 out.go:252] - Booting up control plane ...
I1212 00:07:04.211246 42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 00:07:04.211327 42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 00:07:04.212009 42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 00:07:04.242961 42564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 00:07:04.243110 42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1212 00:07:04.250760 42564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1212 00:07:04.251035 42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 00:07:04.251218 42564 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1212 00:07:04.387520 42564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1212 00:07:04.387634 42564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1212 00:11:04.388521 42564 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001247859s
I1212 00:11:04.388544 42564 kubeadm.go:319]
I1212 00:11:04.388606 42564 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1212 00:11:04.388660 42564 kubeadm.go:319] - The kubelet is not running
I1212 00:11:04.388770 42564 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1212 00:11:04.388777 42564 kubeadm.go:319]
I1212 00:11:04.388882 42564 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1212 00:11:04.388913 42564 kubeadm.go:319] - 'systemctl status kubelet'
I1212 00:11:04.388943 42564 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1212 00:11:04.388958 42564 kubeadm.go:319]
I1212 00:11:04.393990 42564 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1212 00:11:04.394393 42564 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1212 00:11:04.394494 42564 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1212 00:11:04.394713 42564 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1212 00:11:04.394717 42564 kubeadm.go:319]
I1212 00:11:04.394780 42564 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
W1212 00:11:04.394878 42564 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [functional-767012 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001247859s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
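** note **
The init failure above comes down to one probe: kubeadm polls the kubelet's local health endpoint and gives up after 4m0s. The exact call is quoted verbatim in the error message and can be repeated by hand from a shell inside the node (e.g. via 'minikube ssh -p functional-767012', using the profile name from this log):

    curl -sSL http://127.0.0.1:10248/healthz

A healthy kubelet answers 'ok'; here the connection never succeeds because the kubelet process exits during configuration validation, as the kubelet journal further down shows.
** /note **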
I1212 00:11:04.394973 42564 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
I1212 00:11:04.817043 42564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1212 00:11:04.830276 42564 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
I1212 00:11:04.830330 42564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1212 00:11:04.838157 42564 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1212 00:11:04.838165 42564 kubeadm.go:158] found existing configuration files:
I1212 00:11:04.838213 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1212 00:11:04.846139 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1212 00:11:04.846194 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1212 00:11:04.853619 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1212 00:11:04.861776 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1212 00:11:04.861839 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1212 00:11:04.869470 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1212 00:11:04.877266 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1212 00:11:04.877321 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1212 00:11:04.884704 42564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1212 00:11:04.892277 42564 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1212 00:11:04.892335 42564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1212 00:11:04.900247 42564 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1212 00:11:04.941168 42564 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
I1212 00:11:04.941217 42564 kubeadm.go:319] [preflight] Running pre-flight checks
I1212 00:11:05.024404 42564 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
I1212 00:11:05.024470 42564 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
I1212 00:11:05.024505 42564 kubeadm.go:319] OS: Linux
I1212 00:11:05.024548 42564 kubeadm.go:319] CGROUPS_CPU: enabled
I1212 00:11:05.024595 42564 kubeadm.go:319] CGROUPS_CPUACCT: enabled
I1212 00:11:05.024642 42564 kubeadm.go:319] CGROUPS_CPUSET: enabled
I1212 00:11:05.024688 42564 kubeadm.go:319] CGROUPS_DEVICES: enabled
I1212 00:11:05.024735 42564 kubeadm.go:319] CGROUPS_FREEZER: enabled
I1212 00:11:05.024782 42564 kubeadm.go:319] CGROUPS_MEMORY: enabled
I1212 00:11:05.024826 42564 kubeadm.go:319] CGROUPS_PIDS: enabled
I1212 00:11:05.024873 42564 kubeadm.go:319] CGROUPS_HUGETLB: enabled
I1212 00:11:05.024918 42564 kubeadm.go:319] CGROUPS_BLKIO: enabled
I1212 00:11:05.098371 42564 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1212 00:11:05.098488 42564 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1212 00:11:05.098583 42564 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1212 00:11:05.107418 42564 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1212 00:11:05.112606 42564 out.go:252] - Generating certificates and keys ...
I1212 00:11:05.112694 42564 kubeadm.go:319] [certs] Using existing ca certificate authority
I1212 00:11:05.112764 42564 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1212 00:11:05.112847 42564 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I1212 00:11:05.112919 42564 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
I1212 00:11:05.112998 42564 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
I1212 00:11:05.113067 42564 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
I1212 00:11:05.113137 42564 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
I1212 00:11:05.113203 42564 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
I1212 00:11:05.113287 42564 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I1212 00:11:05.113367 42564 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
I1212 00:11:05.113415 42564 kubeadm.go:319] [certs] Using the existing "sa" key
I1212 00:11:05.113477 42564 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1212 00:11:05.267574 42564 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1212 00:11:05.388677 42564 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1212 00:11:05.446578 42564 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1212 00:11:05.627991 42564 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1212 00:11:05.985910 42564 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1212 00:11:05.986540 42564 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1212 00:11:05.989374 42564 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1212 00:11:05.992461 42564 out.go:252] - Booting up control plane ...
I1212 00:11:05.992578 42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1212 00:11:05.992669 42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1212 00:11:05.993416 42564 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1212 00:11:06.019060 42564 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1212 00:11:06.019163 42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1212 00:11:06.028166 42564 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1212 00:11:06.029163 42564 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1212 00:11:06.030602 42564 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1212 00:11:06.184076 42564 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1212 00:11:06.184190 42564 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1212 00:15:06.183993 42564 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000289392s
I1212 00:15:06.184020 42564 kubeadm.go:319]
I1212 00:15:06.184074 42564 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
I1212 00:15:06.184105 42564 kubeadm.go:319] - The kubelet is not running
I1212 00:15:06.184203 42564 kubeadm.go:319] - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I1212 00:15:06.184206 42564 kubeadm.go:319]
I1212 00:15:06.184329 42564 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
I1212 00:15:06.184371 42564 kubeadm.go:319] - 'systemctl status kubelet'
I1212 00:15:06.184402 42564 kubeadm.go:319] - 'journalctl -xeu kubelet'
I1212 00:15:06.184406 42564 kubeadm.go:319]
I1212 00:15:06.188762 42564 kubeadm.go:319] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
I1212 00:15:06.189222 42564 kubeadm.go:319] [WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
I1212 00:15:06.189347 42564 kubeadm.go:319] [WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1212 00:15:06.189587 42564 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
I1212 00:15:06.189592 42564 kubeadm.go:319]
I1212 00:15:06.189666 42564 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
I1212 00:15:06.189706 42564 kubeadm.go:403] duration metric: took 8m5.625385304s to StartCluster
I1212 00:15:06.189737 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1212 00:15:06.189801 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1212 00:15:06.218090 42564 cri.go:89] found id: ""
I1212 00:15:06.218103 42564 logs.go:282] 0 containers: []
W1212 00:15:06.218110 42564 logs.go:284] No container was found matching "kube-apiserver"
I1212 00:15:06.218115 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1212 00:15:06.218176 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1212 00:15:06.245709 42564 cri.go:89] found id: ""
I1212 00:15:06.245723 42564 logs.go:282] 0 containers: []
W1212 00:15:06.245730 42564 logs.go:284] No container was found matching "etcd"
I1212 00:15:06.245734 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1212 00:15:06.245810 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1212 00:15:06.270125 42564 cri.go:89] found id: ""
I1212 00:15:06.270138 42564 logs.go:282] 0 containers: []
W1212 00:15:06.270144 42564 logs.go:284] No container was found matching "coredns"
I1212 00:15:06.270149 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1212 00:15:06.270208 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1212 00:15:06.293928 42564 cri.go:89] found id: ""
I1212 00:15:06.293941 42564 logs.go:282] 0 containers: []
W1212 00:15:06.293948 42564 logs.go:284] No container was found matching "kube-scheduler"
I1212 00:15:06.293953 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1212 00:15:06.294011 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1212 00:15:06.319402 42564 cri.go:89] found id: ""
I1212 00:15:06.319415 42564 logs.go:282] 0 containers: []
W1212 00:15:06.319423 42564 logs.go:284] No container was found matching "kube-proxy"
I1212 00:15:06.319428 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1212 00:15:06.319489 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1212 00:15:06.343928 42564 cri.go:89] found id: ""
I1212 00:15:06.343942 42564 logs.go:282] 0 containers: []
W1212 00:15:06.343948 42564 logs.go:284] No container was found matching "kube-controller-manager"
I1212 00:15:06.343956 42564 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1212 00:15:06.344020 42564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1212 00:15:06.369782 42564 cri.go:89] found id: ""
I1212 00:15:06.369796 42564 logs.go:282] 0 containers: []
W1212 00:15:06.369803 42564 logs.go:284] No container was found matching "kindnet"
I1212 00:15:06.369811 42564 logs.go:123] Gathering logs for describe nodes ...
I1212 00:15:06.369825 42564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1212 00:15:06.436703 42564 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1212 00:15:06.428587 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.429257 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.430360 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.430878 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.432332 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
output:
** stderr **
E1212 00:15:06.428587 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.429257 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.430360 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.430878 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:06.432332 4768 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
** /stderr **
I1212 00:15:06.436714 42564 logs.go:123] Gathering logs for containerd ...
I1212 00:15:06.436726 42564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1212 00:15:06.474894 42564 logs.go:123] Gathering logs for container status ...
I1212 00:15:06.474913 42564 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1212 00:15:06.504172 42564 logs.go:123] Gathering logs for kubelet ...
I1212 00:15:06.504187 42564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1212 00:15:06.563885 42564 logs.go:123] Gathering logs for dmesg ...
I1212 00:15:06.563905 42564 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
W1212 00:15:06.579505 42564 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000289392s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1212 00:15:06.579538 42564 out.go:285] *
W1212 00:15:06.579600 42564 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000289392s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1212 00:15:06.579611 42564 out.go:285] *
W1212 00:15:06.581746 42564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1212 00:15:06.588757 42564 out.go:203]
W1212 00:15:06.592347 42564 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.35.0-beta.0
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.15.0-1084-aws
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_BLKIO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.000289392s
Unfortunately, an error has occurred, likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
stderr:
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher
W1212 00:15:06.592387 42564 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
W1212 00:15:06.592406 42564 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
I1212 00:15:06.595444 42564 out.go:203]
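** note **
The SystemVerification warning repeated above says that kubelet v1.35 or newer refuses to run on a cgroup v1 host unless the kubelet configuration option 'FailCgroupV1' is set to 'false' (and the validation is explicitly skipped). A minimal sketch of that option in a kubelet config file, assuming the kubelet.config.k8s.io/v1beta1 schema where the field is spelled failCgroupV1 in YAML:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false

Note that the suggestion minikube prints ('--extra-config=kubelet.cgroup-driver=systemd') targets the cgroup driver rather than this v1/v2 check, so running the node on a cgroup v2 host would be the more direct fix for this particular error.
** /note **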
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
==> containerd <==
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522476018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522548962Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522635609Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522710514Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522769009Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522829711Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522884440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.522942590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.523041323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.523135421Z" level=info msg="Connect containerd service"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.523513541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.524145628Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.537965981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.538182508Z" level=info msg="Start subscribing containerd event"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.538339974Z" level=info msg="Start recovering state"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.538280330Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577805643Z" level=info msg="Start event monitor"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577855096Z" level=info msg="Start cni network conf syncer for default"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577866526Z" level=info msg="Start streaming server"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577884396Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577895350Z" level=info msg="runtime interface starting up..."
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577903810Z" level=info msg="starting plugins..."
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.577919958Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 00:06:58 functional-767012 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 00:06:58 functional-767012 containerd[767]: time="2025-12-12T00:06:58.579450564Z" level=info msg="containerd successfully booted in 0.081603s"
==> describe nodes <==
command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
E1212 00:15:07.572975 4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:07.573740 4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:07.575493 4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:07.576098 4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
E1212 00:15:07.577663 4898 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
The connection to the server localhost:8441 was refused - did you specify the right host or port?
==> dmesg <==
[Dec11 23:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014465] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.504479] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.038126] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.726220] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +5.947343] kauditd_printk_skb: 36 callbacks suppressed
==> kernel <==
00:15:07 up 57 min, 0 user, load average: 0.21, 0.46, 0.69
Linux functional-767012 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
==> kubelet <==
Dec 12 00:15:04 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
Dec 12 00:15:05 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:05 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:05 functional-767012 kubelet[4698]: E1212 00:15:05.084005 4698 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
Dec 12 00:15:05 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:05 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:05 functional-767012 kubelet[4704]: E1212 00:15:05.833868 4704 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 00:15:05 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 00:15:06 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
Dec 12 00:15:06 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:06 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:06 functional-767012 kubelet[4788]: E1212 00:15:06.620191 4788 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 00:15:06 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 00:15:06 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 00:15:07 functional-767012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
Dec 12 00:15:07 functional-767012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:07 functional-767012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 00:15:07 functional-767012 kubelet[4833]: E1212 00:15:07.351174 4833 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
Dec 12 00:15:07 functional-767012 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 00:15:07 functional-767012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
-- /stdout --
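** note **
The kubelet journal above shows a crash loop (restart counters 318 through 321) with the same validation error each time: the kubelet is configured to fail on cgroup v1 hosts. Which hierarchy the host actually mounts can be confirmed with a standard coreutils invocation (%T prints the filesystem type):

    stat -fc %T /sys/fs/cgroup
    # cgroup2fs -> cgroup v2 (unified hierarchy); tmpfs -> cgroup v1 (legacy)

The kernel string in this log (5.15.0-1084-aws, an Ubuntu 20.04 build) is consistent with a cgroup v1 default, which matches the deprecation warnings kubeadm printed during preflight.
** /note **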
helpers_test.go:263: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-767012 -n functional-767012: exit status 6 (397.417215ms)
-- stdout --
Stopped
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1212 00:15:08.129246 48264 status.go:458] kubeconfig endpoint: get endpoint: "functional-767012" does not appear in /home/jenkins/minikube-integration/22101-2343/kubeconfig
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-767012" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (500.99s)